Mar 17 18:36:15.887621 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025
Mar 17 18:36:15.887644 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 18:36:15.887652 kernel: BIOS-provided physical RAM map:
Mar 17 18:36:15.887658 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 18:36:15.887663 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 18:36:15.887668 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 18:36:15.887674 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Mar 17 18:36:15.887679 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Mar 17 18:36:15.887686 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 18:36:15.887691 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 18:36:15.887696 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 18:36:15.887701 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 18:36:15.887706 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 18:36:15.887712 kernel: NX (Execute Disable) protection: active
Mar 17 18:36:15.887720 kernel: APIC: Static calls initialized
Mar 17 18:36:15.887726 kernel: SMBIOS 3.0.0 present.
Mar 17 18:36:15.887732 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Mar 17 18:36:15.887737 kernel: Hypervisor detected: KVM
Mar 17 18:36:15.887743 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:36:15.887748 kernel: kvm-clock: using sched offset of 3038797435 cycles
Mar 17 18:36:15.887754 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:36:15.887759 kernel: tsc: Detected 2445.406 MHz processor
Mar 17 18:36:15.887765 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:36:15.887773 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:36:15.887779 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Mar 17 18:36:15.887785 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 18:36:15.887790 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:36:15.887796 kernel: Using GB pages for direct mapping
Mar 17 18:36:15.887801 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:36:15.887807 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Mar 17 18:36:15.887812 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:36:15.887818 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:36:15.887826 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:36:15.887832 kernel: ACPI: FACS 0x000000007CFE0000 000040
Mar 17 18:36:15.887838 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:36:15.887844 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:36:15.887849 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:36:15.887855 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:36:15.887861 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Mar 17 18:36:15.887867 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Mar 17 18:36:15.887878 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Mar 17 18:36:15.887884 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Mar 17 18:36:15.887890 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Mar 17 18:36:15.887896 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Mar 17 18:36:15.887901 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Mar 17 18:36:15.887907 kernel: No NUMA configuration found
Mar 17 18:36:15.887913 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Mar 17 18:36:15.887921 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Mar 17 18:36:15.887927 kernel: Zone ranges:
Mar 17 18:36:15.887933 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:36:15.887939 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Mar 17 18:36:15.887945 kernel: Normal empty
Mar 17 18:36:15.887951 kernel: Movable zone start for each node
Mar 17 18:36:15.887957 kernel: Early memory node ranges
Mar 17 18:36:15.887963 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 18:36:15.887969 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Mar 17 18:36:15.887977 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Mar 17 18:36:15.887983 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:36:15.887989 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 18:36:15.887994 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 18:36:15.888000 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:36:15.888007 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:36:15.888013 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:36:15.888019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:36:15.888024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:36:15.888032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:36:15.888039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:36:15.888044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:36:15.888050 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:36:15.888056 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 18:36:15.888062 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 18:36:15.888068 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 18:36:15.888074 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 18:36:15.888080 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:36:15.888088 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:36:15.888094 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 18:36:15.888100 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 18:36:15.888130 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 18:36:15.888136 kernel: pcpu-alloc: [0] 0 1
Mar 17 18:36:15.888142 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 18:36:15.888149 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 18:36:15.888155 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:36:15.888161 kernel: random: crng init done
Mar 17 18:36:15.888170 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:36:15.888176 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 18:36:15.888182 kernel: Fallback order for Node 0: 0
Mar 17 18:36:15.888187 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Mar 17 18:36:15.888193 kernel: Policy zone: DMA32
Mar 17 18:36:15.888199 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:36:15.888205 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 125152K reserved, 0K cma-reserved)
Mar 17 18:36:15.888212 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:36:15.888217 kernel: ftrace: allocating 37938 entries in 149 pages
Mar 17 18:36:15.888226 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 18:36:15.888232 kernel: Dynamic Preempt: voluntary
Mar 17 18:36:15.888237 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:36:15.888244 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:36:15.888250 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:36:15.888256 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:36:15.888263 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:36:15.888268 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:36:15.888274 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:36:15.888283 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:36:15.888289 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 18:36:15.888295 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 18:36:15.888301 kernel: Console: colour VGA+ 80x25
Mar 17 18:36:15.888306 kernel: printk: console [tty0] enabled
Mar 17 18:36:15.888312 kernel: printk: console [ttyS0] enabled
Mar 17 18:36:15.888318 kernel: ACPI: Core revision 20230628
Mar 17 18:36:15.888324 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 18:36:15.888330 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:36:15.888338 kernel: x2apic enabled
Mar 17 18:36:15.888344 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 18:36:15.888350 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:36:15.888356 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 18:36:15.888362 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Mar 17 18:36:15.888368 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 18:36:15.888374 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 18:36:15.888380 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 18:36:15.888394 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:36:15.888401 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:36:15.888407 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:36:15.888413 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:36:15.888422 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 18:36:15.888428 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 18:36:15.888434 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:36:15.888440 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 18:36:15.888447 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 18:36:15.888456 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 18:36:15.888462 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 18:36:15.888469 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:36:15.888475 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:36:15.888481 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:36:15.888487 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:36:15.888494 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 18:36:15.888500 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:36:15.888508 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:36:15.888514 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 18:36:15.888520 kernel: landlock: Up and running.
Mar 17 18:36:15.888527 kernel: SELinux: Initializing.
Mar 17 18:36:15.888533 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:36:15.888539 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:36:15.888545 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 18:36:15.888552 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 18:36:15.888558 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 18:36:15.888566 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 18:36:15.888573 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 18:36:15.888579 kernel: ... version: 0
Mar 17 18:36:15.888585 kernel: ... bit width: 48
Mar 17 18:36:15.888591 kernel: ... generic registers: 6
Mar 17 18:36:15.888597 kernel: ... value mask: 0000ffffffffffff
Mar 17 18:36:15.888612 kernel: ... max period: 00007fffffffffff
Mar 17 18:36:15.888618 kernel: ... fixed-purpose events: 0
Mar 17 18:36:15.888624 kernel: ... event mask: 000000000000003f
Mar 17 18:36:15.888633 kernel: signal: max sigframe size: 1776
Mar 17 18:36:15.888639 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:36:15.888645 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 18:36:15.888651 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:36:15.888658 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 18:36:15.888664 kernel: .... node #0, CPUs: #1
Mar 17 18:36:15.888670 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:36:15.888677 kernel: smpboot: Max logical packages: 1
Mar 17 18:36:15.888683 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Mar 17 18:36:15.888691 kernel: devtmpfs: initialized
Mar 17 18:36:15.888697 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:36:15.888704 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:36:15.888710 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:36:15.888716 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:36:15.888722 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:36:15.888728 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:36:15.888735 kernel: audit: type=2000 audit(1742236575.776:1): state=initialized audit_enabled=0 res=1
Mar 17 18:36:15.888741 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:36:15.888750 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:36:15.888755 kernel: cpuidle: using governor menu
Mar 17 18:36:15.888762 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:36:15.888768 kernel: dca service started, version 1.12.1
Mar 17 18:36:15.888774 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 18:36:15.888780 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:36:15.888787 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:36:15.888793 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:36:15.888799 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 18:36:15.888808 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:36:15.888814 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 18:36:15.888820 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:36:15.888827 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:36:15.888833 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:36:15.888839 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:36:15.888845 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:36:15.888851 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 18:36:15.888857 kernel: ACPI: Interpreter enabled
Mar 17 18:36:15.888866 kernel: ACPI: PM: (supports S0 S5)
Mar 17 18:36:15.888872 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:36:15.888878 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:36:15.888884 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 18:36:15.888891 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 18:36:15.888897 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:36:15.889061 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:36:15.889255 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 18:36:15.889374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 18:36:15.889384 kernel: PCI host bridge to bus 0000:00
Mar 17 18:36:15.889495 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:36:15.889593 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:36:15.889704 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:36:15.889799 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Mar 17 18:36:15.889893 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 18:36:15.889992 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 17 18:36:15.890087 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:36:15.890228 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 18:36:15.890347 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Mar 17 18:36:15.890453 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Mar 17 18:36:15.890558 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Mar 17 18:36:15.890679 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Mar 17 18:36:15.890784 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Mar 17 18:36:15.890888 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:36:15.891001 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 18:36:15.891122 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Mar 17 18:36:15.891247 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 18:36:15.891354 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Mar 17 18:36:15.891472 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 18:36:15.891577 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Mar 17 18:36:15.891706 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 18:36:15.891812 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Mar 17 18:36:15.891923 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 18:36:15.892035 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Mar 17 18:36:15.892170 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 18:36:15.892279 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Mar 17 18:36:15.892391 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 18:36:15.892495 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Mar 17 18:36:15.892620 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 18:36:15.892734 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Mar 17 18:36:15.892873 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 17 18:36:15.892982 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Mar 17 18:36:15.893094 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 18:36:15.893225 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 18:36:15.893342 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 18:36:15.893447 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Mar 17 18:36:15.893555 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Mar 17 18:36:15.893681 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 18:36:15.893786 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 18:36:15.893904 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 18:36:15.894013 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Mar 17 18:36:15.894144 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 17 18:36:15.894260 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Mar 17 18:36:15.894373 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 18:36:15.894476 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 18:36:15.894582 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 17 18:36:15.894717 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 18:36:15.894829 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Mar 17 18:36:15.894934 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 18:36:15.895644 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 18:36:15.895770 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 18:36:15.895892 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 17 18:36:15.896002 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Mar 17 18:36:15.896126 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Mar 17 18:36:15.896234 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 18:36:15.896338 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 18:36:15.896446 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 18:36:15.896561 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 17 18:36:15.896685 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 17 18:36:15.896791 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 18:36:15.896894 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 18:36:15.896998 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 18:36:15.897132 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 18:36:15.897253 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Mar 17 18:36:15.897364 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Mar 17 18:36:15.897468 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 18:36:15.897572 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 18:36:15.897691 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 18:36:15.897809 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 17 18:36:15.897920 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Mar 17 18:36:15.898030 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Mar 17 18:36:15.898160 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 18:36:15.898266 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 18:36:15.898371 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 18:36:15.898393 kernel: acpiphp: Slot [0] registered
Mar 17 18:36:15.898518 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 17 18:36:15.898657 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Mar 17 18:36:15.898769 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Mar 17 18:36:15.901264 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Mar 17 18:36:15.901383 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 18:36:15.901489 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 18:36:15.901594 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 18:36:15.901614 kernel: acpiphp: Slot [0-2] registered
Mar 17 18:36:15.901723 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 18:36:15.901827 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Mar 17 18:36:15.901929 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 18:36:15.901943 kernel: acpiphp: Slot [0-3] registered
Mar 17 18:36:15.902048 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 18:36:15.902173 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 18:36:15.902279 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 18:36:15.902289 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:36:15.902295 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:36:15.902302 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:36:15.902308 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:36:15.902315 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 18:36:15.902325 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 18:36:15.902331 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 18:36:15.902338 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 18:36:15.902345 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 18:36:15.902351 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 18:36:15.902383 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 18:36:15.902403 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 18:36:15.902410 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 18:36:15.902417 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 18:36:15.902426 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 18:36:15.902432 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 18:36:15.902439 kernel: iommu: Default domain type: Translated
Mar 17 18:36:15.902445 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:36:15.902451 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:36:15.902458 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:36:15.902464 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 18:36:15.902470 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Mar 17 18:36:15.902588 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 18:36:15.902716 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 18:36:15.902822 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:36:15.902832 kernel: vgaarb: loaded
Mar 17 18:36:15.902838 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 18:36:15.902845 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 18:36:15.902851 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:36:15.902858 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:36:15.902864 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:36:15.902871 kernel: pnp: PnP ACPI init
Mar 17 18:36:15.902992 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 18:36:15.903003 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 18:36:15.903009 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:36:15.903016 kernel: NET: Registered PF_INET protocol family
Mar 17 18:36:15.903022 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:36:15.903028 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 18:36:15.903035 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:36:15.903041 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:36:15.903051 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 17 18:36:15.903058 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 18:36:15.903064 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:36:15.903070 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:36:15.903077 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:36:15.903083 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:36:15.905276 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 18:36:15.905393 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 18:36:15.905506 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 18:36:15.905645 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 18:36:15.905754 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 18:36:15.905858 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 18:36:15.905963 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 17 18:36:15.906066 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 18:36:15.907008 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 17 18:36:15.907158 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 17 18:36:15.907276 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 18:36:15.907384 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 18:36:15.907500 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 17 18:36:15.907626 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 18:36:15.907734 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 18:36:15.907840 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 17 18:36:15.907950 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 18:36:15.908072 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 18:36:15.908206 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 17 18:36:15.908314 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 18:36:15.908416 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 18:36:15.908522 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 17 18:36:15.908639 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 18:36:15.908744 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 18:36:15.908847 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 17 18:36:15.908951 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Mar 17 18:36:15.909060 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 18:36:15.911216 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 18:36:15.911339 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 17 18:36:15.911444 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Mar 17 18:36:15.911547 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Mar 17 18:36:15.911668 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 18:36:15.911778 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 17 18:36:15.911881 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Mar 17 18:36:15.911990 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 18:36:15.912094 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 18:36:15.913262 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:36:15.913374 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:36:15.913515 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:36:15.913644 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Mar 17 18:36:15.913772 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:36:15.913900 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 17 18:36:15.914042 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 17 18:36:15.915182 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Mar 17 18:36:15.915356 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 17 18:36:15.915472 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 18:36:15.915582 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 17 18:36:15.915696 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 18:36:15.915805 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 17 18:36:15.915905 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 18:36:15.916016 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 17 18:36:15.917162 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 18:36:15.917328 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 17 18:36:15.917437 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 18:36:15.917545 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Mar 17 18:36:15.917659 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 17 18:36:15.917766 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 18:36:15.917873 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Mar 17 18:36:15.917973 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Mar 17 18:36:15.918073 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 18:36:15.919255 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Mar 17 18:36:15.919365 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 17 18:36:15.919465 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 18:36:15.919481 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 18:36:15.919488 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:36:15.919495 kernel: Initialise system trusted keyrings
Mar 17 18:36:15.919502 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 18:36:15.919509 kernel: Key type asymmetric registered
Mar 17 18:36:15.919516 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:36:15.919522 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 18:36:15.919529 kernel: io scheduler mq-deadline registered
Mar 17 18:36:15.919535 kernel: io scheduler kyber registered
Mar 17 18:36:15.919542 kernel: io scheduler bfq registered
Mar 17 18:36:15.919669 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Mar 17 18:36:15.919778 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Mar 17 18:36:15.919885 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Mar 17 18:36:15.919989 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Mar 17 18:36:15.920095 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Mar 17 18:36:15.921252 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Mar 17 18:36:15.921364 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Mar 17 18:36:15.921470 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Mar 17 18:36:15.921580 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Mar 17 18:36:15.921696 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Mar 17 18:36:15.921803 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Mar 17 18:36:15.921908 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Mar 17 18:36:15.922014 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Mar 17 18:36:15.923180 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Mar 17 18:36:15.923304 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Mar 17 18:36:15.923410 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Mar 17 18:36:15.923426 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 18:36:15.923530 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Mar 17 18:36:15.923648 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Mar 17 18:36:15.923659 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:36:15.923666 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Mar 17 18:36:15.923673 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:36:15.923680 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:36:15.923687 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:36:15.923694 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:36:15.923705 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:36:15.923711 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:36:15.923824 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 17 18:36:15.923923 kernel: rtc_cmos 00:03: registered as rtc0
Mar 17 18:36:15.924026 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T18:36:15 UTC (1742236575)
Mar 17 18:36:15.925179 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 18:36:15.925192 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 17 18:36:15.925199 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:36:15.925210 kernel: Segment Routing with IPv6
Mar 17 18:36:15.925218 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:36:15.925225 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:36:15.925231 kernel: Key type dns_resolver registered
Mar 17 18:36:15.925239 kernel: IPI shorthand broadcast: enabled
Mar 17 18:36:15.925245 kernel: sched_clock: Marking stable (1097007196, 133057326)->(1239356209, -9291687)
Mar 17 18:36:15.925252 kernel: registered taskstats version 1
Mar 17 18:36:15.925259 kernel: Loading compiled-in X.509 certificates
Mar 17 18:36:15.925266 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0'
Mar 17 18:36:15.925275 kernel: Key type .fscrypt registered
Mar 17 18:36:15.925281 kernel: Key type fscrypt-provisioning registered
Mar 17 18:36:15.925288 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:36:15.925297 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:36:15.925303 kernel: ima: No architecture policies found
Mar 17 18:36:15.925310 kernel: clk: Disabling unused clocks
Mar 17 18:36:15.925316 kernel: Freeing unused kernel image (initmem) memory: 42992K
Mar 17 18:36:15.925323 kernel: Write protecting the kernel read-only data: 36864k
Mar 17 18:36:15.925332 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Mar 17 18:36:15.925339 kernel: Run /init as init process
Mar 17 18:36:15.925346 kernel: with arguments:
Mar 17 18:36:15.925352 kernel: /init
Mar 17 18:36:15.925359 kernel: with environment:
Mar 17 18:36:15.925365 kernel: HOME=/
Mar 17 18:36:15.925372 kernel: TERM=linux
Mar 17 18:36:15.925378 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:36:15.925387 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 18:36:15.925399 systemd[1]: Detected virtualization kvm.
Mar 17 18:36:15.925407 systemd[1]: Detected architecture x86-64.
Mar 17 18:36:15.925413 systemd[1]: Running in initrd.
Mar 17 18:36:15.925420 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:36:15.925427 systemd[1]: Hostname set to .
Mar 17 18:36:15.925434 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:36:15.925441 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:36:15.925448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 18:36:15.925457 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 18:36:15.925465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 18:36:15.925472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 18:36:15.925480 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 18:36:15.925487 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 18:36:15.925496 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 18:36:15.925505 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 18:36:15.925512 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 18:36:15.925519 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 18:36:15.925526 systemd[1]: Reached target paths.target - Path Units.
Mar 17 18:36:15.925534 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 18:36:15.925541 systemd[1]: Reached target swap.target - Swaps.
Mar 17 18:36:15.925548 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 18:36:15.925555 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 18:36:15.925562 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 18:36:15.925571 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 18:36:15.925578 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 18:36:15.925585 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 18:36:15.925592 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 18:36:15.925616 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 18:36:15.925624 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 18:36:15.925631 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 18:36:15.925638 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 18:36:15.925648 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 18:36:15.925655 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:36:15.925662 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 18:36:15.925669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 18:36:15.925676 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 18:36:15.925683 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 18:36:15.925713 systemd-journald[188]: Collecting audit messages is disabled.
Mar 17 18:36:15.925734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 18:36:15.925742 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:36:15.925750 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 18:36:15.925759 systemd-journald[188]: Journal started
Mar 17 18:36:15.925775 systemd-journald[188]: Runtime Journal (/run/log/journal/4f0ad93fe1be482dbc6c1d1150898fdc) is 4.8M, max 38.4M, 33.6M free.
Mar 17 18:36:15.900310 systemd-modules-load[189]: Inserted module 'overlay'
Mar 17 18:36:15.961880 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 18:36:15.961916 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:36:15.961929 kernel: Bridge firewalling registered
Mar 17 18:36:15.935187 systemd-modules-load[189]: Inserted module 'br_netfilter'
Mar 17 18:36:15.961780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 18:36:15.962441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 18:36:15.963442 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 18:36:15.972246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 18:36:15.974237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 18:36:15.981227 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 18:36:15.983667 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 18:36:15.991045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 18:36:15.994559 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 18:36:15.997062 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 18:36:16.007571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 18:36:16.009248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 18:36:16.016679 dracut-cmdline[216]: dracut-dracut-053
Mar 17 18:36:16.018347 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 18:36:16.022026 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 18:36:16.050566 systemd-resolved[225]: Positive Trust Anchors:
Mar 17 18:36:16.050580 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:36:16.050619 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 18:36:16.056321 systemd-resolved[225]: Defaulting to hostname 'linux'.
Mar 17 18:36:16.057293 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 18:36:16.058059 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 18:36:16.090162 kernel: SCSI subsystem initialized
Mar 17 18:36:16.099141 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:36:16.109139 kernel: iscsi: registered transport (tcp)
Mar 17 18:36:16.127250 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:36:16.127333 kernel: QLogic iSCSI HBA Driver
Mar 17 18:36:16.175355 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 18:36:16.182251 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 18:36:16.205153 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:36:16.205222 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:36:16.208153 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 18:36:16.248142 kernel: raid6: avx2x4 gen() 32179 MB/s
Mar 17 18:36:16.266192 kernel: raid6: avx2x2 gen() 27861 MB/s
Mar 17 18:36:16.283292 kernel: raid6: avx2x1 gen() 21801 MB/s
Mar 17 18:36:16.283366 kernel: raid6: using algorithm avx2x4 gen() 32179 MB/s
Mar 17 18:36:16.301382 kernel: raid6: .... xor() 4390 MB/s, rmw enabled
Mar 17 18:36:16.301456 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 18:36:16.321189 kernel: xor: automatically using best checksumming function avx
Mar 17 18:36:16.451156 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 18:36:16.464294 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 18:36:16.469289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 18:36:16.483095 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Mar 17 18:36:16.486942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 18:36:16.496281 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 18:36:16.511500 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Mar 17 18:36:16.542300 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 18:36:16.548235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 18:36:16.613872 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 18:36:16.620326 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 18:36:16.633902 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 18:36:16.639214 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 18:36:16.639723 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 18:36:16.640810 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 18:36:16.647639 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 18:36:16.660615 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 18:36:16.699125 kernel: scsi host0: Virtio SCSI HBA
Mar 17 18:36:16.710133 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 17 18:36:16.723290 kernel: ACPI: bus type USB registered
Mar 17 18:36:16.723316 kernel: usbcore: registered new interface driver usbfs
Mar 17 18:36:16.726197 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:36:16.733373 kernel: usbcore: registered new interface driver hub
Mar 17 18:36:16.733396 kernel: usbcore: registered new device driver usb
Mar 17 18:36:16.736357 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:36:16.736479 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 18:36:16.739810 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 18:36:16.741674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:36:16.741860 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 18:36:16.742517 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 18:36:16.770652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 18:36:16.779138 kernel: libata version 3.00 loaded.
Mar 17 18:36:16.786082 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 18:36:16.822874 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 18:36:16.822891 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 18:36:16.822900 kernel: AES CTR mode by8 optimization enabled
Mar 17 18:36:16.822908 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 18:36:16.823052 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 18:36:16.823221 kernel: scsi host1: ahci
Mar 17 18:36:16.823372 kernel: scsi host2: ahci
Mar 17 18:36:16.823503 kernel: scsi host3: ahci
Mar 17 18:36:16.823645 kernel: scsi host4: ahci
Mar 17 18:36:16.823769 kernel: scsi host5: ahci
Mar 17 18:36:16.823898 kernel: scsi host6: ahci
Mar 17 18:36:16.824022 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Mar 17 18:36:16.824032 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Mar 17 18:36:16.824042 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Mar 17 18:36:16.824051 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Mar 17 18:36:16.824059 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Mar 17 18:36:16.824067 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Mar 17 18:36:16.863894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 18:36:16.870263 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 18:36:16.882921 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 18:36:17.138143 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 18:36:17.138249 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 18:36:17.138273 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 18:36:17.138292 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 18:36:17.138309 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 17 18:36:17.140149 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 18:36:17.142574 kernel: ata1.00: applying bridge limits
Mar 17 18:36:17.145327 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 18:36:17.146139 kernel: ata1.00: configured for UDMA/100
Mar 17 18:36:17.148384 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 18:36:17.176141 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 17 18:36:17.199130 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 17 18:36:17.199367 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 17 18:36:17.199602 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 17 18:36:17.199809 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 17 18:36:17.200008 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 17 18:36:17.200234 kernel: hub 1-0:1.0: USB hub found
Mar 17 18:36:17.200468 kernel: hub 1-0:1.0: 4 ports detected
Mar 17 18:36:17.200690 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 17 18:36:17.200915 kernel: hub 2-0:1.0: USB hub found
Mar 17 18:36:17.201207 kernel: hub 2-0:1.0: 4 ports detected
Mar 17 18:36:17.201423 kernel: sd 0:0:0:0: Power-on or device reset occurred
Mar 17 18:36:17.219601 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Mar 17 18:36:17.219839 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 17 18:36:17.220051 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Mar 17 18:36:17.220302 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 17 18:36:17.220527 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 18:36:17.224647 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 18:36:17.224667 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:36:17.224681 kernel: GPT:17805311 != 80003071
Mar 17 18:36:17.224694 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:36:17.224708 kernel: GPT:17805311 != 80003071
Mar 17 18:36:17.224720 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:36:17.224733 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:36:17.224746 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 17 18:36:17.224981 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Mar 17 18:36:17.267155 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (461)
Mar 17 18:36:17.275151 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (451)
Mar 17 18:36:17.278189 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 17 18:36:17.284135 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 17 18:36:17.292339 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 17 18:36:17.293461 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 17 18:36:17.299382 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 17 18:36:17.307355 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 18:36:17.316124 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:36:17.317443 disk-uuid[574]: Primary Header is updated.
Mar 17 18:36:17.317443 disk-uuid[574]: Secondary Entries is updated.
Mar 17 18:36:17.317443 disk-uuid[574]: Secondary Header is updated.
Mar 17 18:36:17.442142 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 17 18:36:17.579303 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:36:17.584289 kernel: usbcore: registered new interface driver usbhid
Mar 17 18:36:17.584334 kernel: usbhid: USB HID core driver
Mar 17 18:36:17.591251 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Mar 17 18:36:17.591292 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Mar 17 18:36:18.332190 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:36:18.333425 disk-uuid[575]: The operation has completed successfully.
Mar 17 18:36:18.388308 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:36:18.388422 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 18:36:18.402237 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 18:36:18.413236 sh[591]: Success Mar 17 18:36:18.425553 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 17 18:36:18.469913 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 18:36:18.482549 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 18:36:18.483276 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 18:36:18.503021 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a Mar 17 18:36:18.503071 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:36:18.503082 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 18:36:18.505939 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 18:36:18.505967 kernel: BTRFS info (device dm-0): using free space tree Mar 17 18:36:18.516134 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 17 18:36:18.518541 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 18:36:18.519797 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 18:36:18.527278 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 18:36:18.530245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 18:36:18.541136 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 18:36:18.543843 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:36:18.543876 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:36:18.548940 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:36:18.548964 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 18:36:18.557667 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:36:18.559764 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 18:36:18.566722 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 18:36:18.573293 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 18:36:18.656364 ignition[685]: Ignition 2.20.0 Mar 17 18:36:18.656380 ignition[685]: Stage: fetch-offline Mar 17 18:36:18.657363 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 18:36:18.656411 ignition[685]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:36:18.656432 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:36:18.659521 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 18:36:18.656509 ignition[685]: parsed url from cmdline: "" Mar 17 18:36:18.656513 ignition[685]: no config URL provided Mar 17 18:36:18.656518 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:36:18.656526 ignition[685]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:36:18.656533 ignition[685]: failed to fetch config: resource requires networking Mar 17 18:36:18.656705 ignition[685]: Ignition finished successfully Mar 17 18:36:18.666278 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
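Annotation: verity-setup.service above activates /dev/mapper/usr against the verity.usrhash root hash passed on the kernel command line; conceptually this is a Merkle tree of block hashes whose root must match that value. A toy sketch of the hash-tree idea only; it deliberately ignores dm-verity's salt, superblock, and on-disk layout, so it will not reproduce the real root hash:

```python
import hashlib

BLOCK = 4096  # dm-verity's default data/hash block size

def tree_root(data: bytes) -> str:
    """Toy Merkle-tree root over fixed-size blocks (illustration only, not dm-verity's format)."""
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)] or [hashlib.sha256(b"").digest()]
    while len(level) > 1:
        # hash pairs of child digests to form the next level up
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

if __name__ == "__main__":
    print(tree_root(b"example data" * 1000))
```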
Mar 17 18:36:18.687659 systemd-networkd[779]: lo: Link UP Mar 17 18:36:18.687671 systemd-networkd[779]: lo: Gained carrier Mar 17 18:36:18.690009 systemd-networkd[779]: Enumeration completed Mar 17 18:36:18.690214 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 18:36:18.690852 systemd[1]: Reached target network.target - Network. Mar 17 18:36:18.691295 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:36:18.691300 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:36:18.694690 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:36:18.694693 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:36:18.696548 systemd-networkd[779]: eth0: Link UP Mar 17 18:36:18.696552 systemd-networkd[779]: eth0: Gained carrier Mar 17 18:36:18.696558 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:36:18.700293 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 17 18:36:18.701342 systemd-networkd[779]: eth1: Link UP Mar 17 18:36:18.701345 systemd-networkd[779]: eth1: Gained carrier Mar 17 18:36:18.701352 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:36:18.710259 ignition[781]: Ignition 2.20.0 Mar 17 18:36:18.710272 ignition[781]: Stage: fetch Mar 17 18:36:18.710434 ignition[781]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:36:18.710445 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:36:18.710525 ignition[781]: parsed url from cmdline: "" Mar 17 18:36:18.710529 ignition[781]: no config URL provided Mar 17 18:36:18.710534 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:36:18.710542 ignition[781]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:36:18.710561 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 17 18:36:18.710699 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 17 18:36:18.727152 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:36:18.766185 systemd-networkd[779]: eth0: DHCPv4 address 37.27.32.129/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 17 18:36:18.911146 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 17 18:36:18.915489 ignition[781]: GET result: OK Mar 17 18:36:18.915551 ignition[781]: parsing config with SHA512: 518c65e6a0f3eec75b224122839c1651903f9f7a90ff8e4d5df9345574b988d830897f99d94fc16fedef6bce1f18b6a4cb74def112612ccf97f85a4d0f16a042 Mar 17 18:36:18.918909 unknown[781]: fetched base config from "system" Mar 17 18:36:18.919254 ignition[781]: fetch: fetch complete Mar 17 18:36:18.918927 unknown[781]: fetched base config from "system" Mar 17 18:36:18.919264 ignition[781]: fetch: fetch passed Mar 17 18:36:18.918935 unknown[781]: fetched user config from "hetzner" Mar 17 18:36:18.919315 ignition[781]: Ignition finished successfully Mar 17 18:36:18.922982 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
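Annotation: the fetch stage above first fails with "network is unreachable" and succeeds on attempt #2 once DHCP has configured eth0/eth1; the config comes from the link-local metadata endpoint shown in the log. A minimal sketch of the same request, only meaningful from inside a Hetzner instance; the retry loop is an illustrative assumption, not Ignition's actual backoff logic:

```python
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

def fetch_userdata(attempts: int = 5, delay: float = 2.0) -> bytes:
    """Fetch instance user data, retrying while the network is still coming up."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("could not reach the metadata service")

if __name__ == "__main__":
    print(fetch_userdata().decode(errors="replace"))
```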
Mar 17 18:36:18.930281 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 18:36:18.945418 ignition[788]: Ignition 2.20.0 Mar 17 18:36:18.945438 ignition[788]: Stage: kargs Mar 17 18:36:18.945647 ignition[788]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:36:18.945661 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:36:18.946332 ignition[788]: kargs: kargs passed Mar 17 18:36:18.948645 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 18:36:18.946382 ignition[788]: Ignition finished successfully Mar 17 18:36:18.959319 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 18:36:18.970922 ignition[795]: Ignition 2.20.0 Mar 17 18:36:18.970935 ignition[795]: Stage: disks Mar 17 18:36:18.971090 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:36:18.971101 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:36:18.973034 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 18:36:18.971661 ignition[795]: disks: disks passed Mar 17 18:36:18.974500 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 18:36:18.971699 ignition[795]: Ignition finished successfully Mar 17 18:36:18.977218 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 18:36:18.977919 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 18:36:18.979127 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 18:36:18.980123 systemd[1]: Reached target basic.target - Basic System. Mar 17 18:36:18.986386 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 18:36:19.003205 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 17 18:36:19.006987 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 18:36:19.012264 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 18:36:19.093445 kernel: EXT4-fs (sda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none. Mar 17 18:36:19.093904 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 18:36:19.094891 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 18:36:19.106203 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 18:36:19.108875 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 18:36:19.111016 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 17 18:36:19.113821 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:36:19.114787 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 18:36:19.120537 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (812) Mar 17 18:36:19.120594 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 18:36:19.125621 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:36:19.125650 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:36:19.125171 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 18:36:19.126815 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
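Annotation: systemd-fsck above reports "ROOT: clean, 14/1628000 files, 120691/1617920 blocks"; the two fractions are inode and block usage of the ext4 ROOT filesystem that is then mounted at /sysroot. A quick check of what those figures amount to (the 4 KiB block size is an assumption, not stated in the log):

```python
# Figures copied from the systemd-fsck line above.
files_used, files_total = 14, 1_628_000
blocks_used, blocks_total = 120_691, 1_617_920
block_size = 4096  # assumed ext4 block size

print(f"inodes: {files_used}/{files_total} ({100 * files_used / files_total:.4f}% used)")
print(f"blocks: {blocks_used}/{blocks_total} ({100 * blocks_used / blocks_total:.2f}% used)")
print(f"approx. data on ROOT: {blocks_used * block_size / 2**20:.1f} MiB "
      f"of {blocks_total * block_size / 2**30:.1f} GiB (if blocks are 4 KiB)")
```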
Mar 17 18:36:19.133893 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:36:19.133916 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 18:36:19.139939 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 18:36:19.177423 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:36:19.178283 coreos-metadata[814]: Mar 17 18:36:19.178 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 17 18:36:19.181313 coreos-metadata[814]: Mar 17 18:36:19.180 INFO Fetch successful Mar 17 18:36:19.182291 coreos-metadata[814]: Mar 17 18:36:19.182 INFO wrote hostname ci-4152-2-2-f-0ddc29a7f8 to /sysroot/etc/hostname Mar 17 18:36:19.186150 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:36:19.185713 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 18:36:19.190746 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:36:19.195336 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:36:19.282744 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 18:36:19.289195 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 18:36:19.291250 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 18:36:19.300158 kernel: BTRFS info (device sda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 18:36:19.320938 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 18:36:19.326775 ignition[929]: INFO : Ignition 2.20.0 Mar 17 18:36:19.326775 ignition[929]: INFO : Stage: mount Mar 17 18:36:19.328487 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:36:19.328487 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:36:19.328487 ignition[929]: INFO : mount: mount passed Mar 17 18:36:19.328487 ignition[929]: INFO : Ignition finished successfully Mar 17 18:36:19.328775 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 18:36:19.336221 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 18:36:19.500624 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 18:36:19.511303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 18:36:19.526169 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (941) Mar 17 18:36:19.531360 kernel: BTRFS info (device sda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 18:36:19.531416 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:36:19.534136 kernel: BTRFS info (device sda6): using free space tree Mar 17 18:36:19.540396 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 17 18:36:19.540473 kernel: BTRFS info (device sda6): auto enabling async discard Mar 17 18:36:19.545428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
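Annotation: flatcar-metadata-hostname.service above fetches the hostname from the Hetzner metadata service and writes it to /sysroot/etc/hostname. A sketch of the same two steps, with the destination parameterised to a local file since writing /etc/hostname on a live system needs root:

```python
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # endpoint from the log

def write_hostname(dest: str = "hostname.out") -> str:
    """Fetch the instance hostname and write it to `dest`, one name plus a newline."""
    with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(dest, "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname:", write_hostname())
```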
Mar 17 18:36:19.576430 ignition[958]: INFO : Ignition 2.20.0 Mar 17 18:36:19.576430 ignition[958]: INFO : Stage: files Mar 17 18:36:19.577909 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:36:19.577909 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:36:19.579683 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:36:19.579683 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:36:19.579683 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:36:19.584474 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:36:19.585310 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:36:19.586244 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:36:19.585539 unknown[958]: wrote ssh authorized keys file for user: core Mar 17 18:36:19.587997 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:36:19.588913 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:36:19.589849 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:36:19.589849 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:36:19.589849 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 18:36:19.589849 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 18:36:19.589849 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 18:36:19.595416 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Mar 17 18:36:19.860387 systemd-networkd[779]: eth0: Gained IPv6LL Mar 17 18:36:20.052365 systemd-networkd[779]: eth1: Gained IPv6LL Mar 17 18:36:20.318133 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 17 18:36:20.638162 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 18:36:20.638162 ignition[958]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Mar 17 18:36:20.640724 ignition[958]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 17 18:36:20.643411 ignition[958]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 17 18:36:20.643411 ignition[958]: INFO : files: op(7): [finished] 
processing unit "coreos-metadata.service" Mar 17 18:36:20.643411 ignition[958]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:36:20.643411 ignition[958]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:36:20.643411 ignition[958]: INFO : files: files passed Mar 17 18:36:20.643411 ignition[958]: INFO : Ignition finished successfully Mar 17 18:36:20.644092 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 18:36:20.654378 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 18:36:20.657492 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 18:36:20.658635 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:36:20.658731 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 18:36:20.671195 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:36:20.671195 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:36:20.673708 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:36:20.676449 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 18:36:20.677806 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 18:36:20.686241 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 18:36:20.710202 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:36:20.710339 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 18:36:20.711857 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 18:36:20.713378 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 18:36:20.713909 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 18:36:20.715722 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 18:36:20.738855 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 18:36:20.745239 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 18:36:20.755717 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 18:36:20.756461 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 18:36:20.757556 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 18:36:20.758568 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:36:20.758679 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 18:36:20.759866 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 18:36:20.760526 systemd[1]: Stopped target basic.target - Basic System. Mar 17 18:36:20.761546 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 18:36:20.762469 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 18:36:20.763404 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
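Annotation: the files stage above downloaded the kubernetes sysext image and linked it into /etc/extensions so systemd-sysext can merge it after switch-root (the "Merged extensions" message later in the log). A hedged sketch of the equivalent manual steps, using the URL and target layout from the log but working under a scratch directory instead of /sysroot:

```python
import os
import urllib.request

# URL and layout taken from the Ignition messages above.
URL = ("https://github.com/flatcar/sysext-bakery/releases/download/latest/"
       "kubernetes-v1.31.0-x86-64.raw")
ROOT = "scratch-root"  # stand-in for /sysroot
IMAGE = os.path.join(ROOT, "opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw")
LINK = os.path.join(ROOT, "etc/extensions/kubernetes.raw")

os.makedirs(os.path.dirname(IMAGE), exist_ok=True)
os.makedirs(os.path.dirname(LINK), exist_ok=True)

urllib.request.urlretrieve(URL, IMAGE)  # the same download Ignition performs in op(6)

# Mirror the symlink /etc/extensions/kubernetes.raw -> /opt/extensions/... (relative here
# so it resolves inside the scratch directory).
target = os.path.relpath(IMAGE, os.path.dirname(LINK))
if os.path.lexists(LINK):
    os.remove(LINK)
os.symlink(target, LINK)
print(LINK, "->", os.readlink(LINK))
```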
Mar 17 18:36:20.764452 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 18:36:20.765502 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 18:36:20.766571 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 18:36:20.767583 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 18:36:20.768630 systemd[1]: Stopped target swap.target - Swaps. Mar 17 18:36:20.769615 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:36:20.769718 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 18:36:20.770840 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 18:36:20.771532 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 18:36:20.772626 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 18:36:20.774271 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 18:36:20.775068 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:36:20.775180 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 18:36:20.776572 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:36:20.776673 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 18:36:20.777397 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:36:20.777487 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 18:36:20.778304 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 18:36:20.778395 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 18:36:20.790915 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 18:36:20.794301 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 18:36:20.794784 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:36:20.794930 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 18:36:20.796222 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:36:20.796376 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 18:36:20.802608 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:36:20.802722 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 18:36:20.810769 ignition[1011]: INFO : Ignition 2.20.0 Mar 17 18:36:20.812170 ignition[1011]: INFO : Stage: umount Mar 17 18:36:20.812170 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:36:20.812170 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 17 18:36:20.815075 ignition[1011]: INFO : umount: umount passed Mar 17 18:36:20.815075 ignition[1011]: INFO : Ignition finished successfully Mar 17 18:36:20.816474 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:36:20.816595 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 18:36:20.820241 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:36:20.820349 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 18:36:20.820888 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:36:20.820933 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Mar 17 18:36:20.821800 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:36:20.821843 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 18:36:20.822959 systemd[1]: Stopped target network.target - Network. Mar 17 18:36:20.824300 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:36:20.824354 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 18:36:20.827848 systemd[1]: Stopped target paths.target - Path Units. Mar 17 18:36:20.828851 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:36:20.832211 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 18:36:20.832784 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 18:36:20.833869 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 18:36:20.834811 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:36:20.834863 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 18:36:20.839082 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:36:20.839170 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 18:36:20.839950 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:36:20.840009 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 18:36:20.840911 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 18:36:20.840957 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 18:36:20.841995 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 18:36:20.842903 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 18:36:20.845152 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:36:20.845711 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:36:20.845826 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 18:36:20.847411 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:36:20.847493 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 18:36:20.848203 systemd-networkd[779]: eth0: DHCPv6 lease lost Mar 17 18:36:20.849821 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:36:20.849940 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 18:36:20.853237 systemd-networkd[779]: eth1: DHCPv6 lease lost Mar 17 18:36:20.854816 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 18:36:20.854916 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 18:36:20.856098 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:36:20.856312 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 18:36:20.857534 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:36:20.857603 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 18:36:20.865223 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 18:36:20.865679 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:36:20.865751 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 18:36:20.866312 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 17 18:36:20.866367 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:36:20.866834 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:36:20.866881 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 18:36:20.868307 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 18:36:20.881310 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:36:20.881480 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 18:36:20.882267 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:36:20.882329 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 18:36:20.882997 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:36:20.883035 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 18:36:20.883886 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:36:20.883936 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 18:36:20.885453 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:36:20.885500 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 18:36:20.886561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:36:20.886608 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 18:36:20.898360 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 18:36:20.898852 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 18:36:20.898926 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 18:36:20.899479 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 18:36:20.899533 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 18:36:20.900033 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:36:20.900081 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 18:36:20.903206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:36:20.903262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 18:36:20.904172 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:36:20.904287 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 18:36:20.904948 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:36:20.905040 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 18:36:20.906927 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 18:36:20.913332 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 18:36:20.921388 systemd[1]: Switching root. Mar 17 18:36:20.954627 systemd-journald[188]: Journal stopped Mar 17 18:36:21.875421 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). 
Mar 17 18:36:21.875485 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:36:21.875503 kernel: SELinux: policy capability open_perms=1 Mar 17 18:36:21.875517 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:36:21.875526 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:36:21.875540 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:36:21.875550 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:36:21.875564 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:36:21.875578 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:36:21.875588 kernel: audit: type=1403 audit(1742236581.103:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:36:21.875598 systemd[1]: Successfully loaded SELinux policy in 44.814ms. Mar 17 18:36:21.875615 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.814ms. Mar 17 18:36:21.875628 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 18:36:21.875639 systemd[1]: Detected virtualization kvm. Mar 17 18:36:21.875654 systemd[1]: Detected architecture x86-64. Mar 17 18:36:21.875663 systemd[1]: Detected first boot. Mar 17 18:36:21.875674 systemd[1]: Hostname set to . Mar 17 18:36:21.875684 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:36:21.875694 zram_generator::config[1054]: No configuration found. Mar 17 18:36:21.875706 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:36:21.875719 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 18:36:21.875729 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 18:36:21.875744 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 18:36:21.875754 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 18:36:21.875765 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 18:36:21.875775 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 18:36:21.875784 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 18:36:21.875794 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 18:36:21.875808 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 18:36:21.875818 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 18:36:21.875828 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 18:36:21.875838 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 18:36:21.875848 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 18:36:21.875858 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 18:36:21.875869 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 18:36:21.875879 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
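Annotation: "Initializing machine ID from VM UUID" above means systemd derives /etc/machine-id from the hypervisor-provided DMI UUID on this first boot. A small sketch that reads both values for comparison; the paths are the standard sysfs/etc locations, reading product_uuid usually requires root, and the "dashes stripped" derivation checked at the end is an assumption rather than something the log confirms:

```python
from pathlib import Path

DMI_UUID = Path("/sys/class/dmi/id/product_uuid")  # usually root-readable only
MACHINE_ID = Path("/etc/machine-id")

vm_uuid = DMI_UUID.read_text().strip()
machine_id = MACHINE_ID.read_text().strip()

print("DMI product UUID :", vm_uuid)
print("machine-id       :", machine_id)
# Assumption: on a first boot like the one logged above, the machine ID is commonly
# the VM UUID with dashes removed and lowercased.
print("match (dashes stripped):", vm_uuid.replace("-", "").lower() == machine_id.lower())
```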
Mar 17 18:36:21.875889 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 18:36:21.875901 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 18:36:21.875911 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 18:36:21.875921 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 18:36:21.875931 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 18:36:21.875942 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 18:36:21.875952 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 18:36:21.875964 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 18:36:21.875975 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 18:36:21.875985 systemd[1]: Reached target slices.target - Slice Units. Mar 17 18:36:21.875995 systemd[1]: Reached target swap.target - Swaps. Mar 17 18:36:21.876005 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 18:36:21.876015 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 18:36:21.876025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 18:36:21.876035 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 18:36:21.876045 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 18:36:21.876054 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 18:36:21.876067 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 18:36:21.876077 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 18:36:21.876087 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 18:36:21.876096 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:36:21.876270 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 18:36:21.876288 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 18:36:21.876299 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 18:36:21.876309 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:36:21.876324 systemd[1]: Reached target machines.target - Containers. Mar 17 18:36:21.876334 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 18:36:21.876344 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 18:36:21.876354 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 18:36:21.876364 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 18:36:21.876374 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 18:36:21.876384 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 18:36:21.876394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 18:36:21.876404 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Mar 17 18:36:21.876417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 18:36:21.876432 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:36:21.876444 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:36:21.876455 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 18:36:21.876465 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:36:21.877144 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:36:21.877161 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 18:36:21.877172 kernel: loop: module loaded Mar 17 18:36:21.877183 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 18:36:21.877193 kernel: fuse: init (API version 7.39) Mar 17 18:36:21.877203 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 18:36:21.877243 systemd-journald[1130]: Collecting audit messages is disabled. Mar 17 18:36:21.877264 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 18:36:21.877279 systemd-journald[1130]: Journal started Mar 17 18:36:21.877298 systemd-journald[1130]: Runtime Journal (/run/log/journal/4f0ad93fe1be482dbc6c1d1150898fdc) is 4.8M, max 38.4M, 33.6M free. Mar 17 18:36:21.633150 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:36:21.653851 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 17 18:36:21.654459 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:36:21.881155 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 18:36:21.884160 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:36:21.884191 systemd[1]: Stopped verity-setup.service. Mar 17 18:36:21.889147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:36:21.900253 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 18:36:21.896876 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 18:36:21.899275 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 18:36:21.899832 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 18:36:21.900973 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 18:36:21.901562 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 18:36:21.902129 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 18:36:21.905398 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 18:36:21.906816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 18:36:21.907603 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:36:21.907755 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 18:36:21.908534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:36:21.908684 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 18:36:21.909674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 17 18:36:21.909812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 18:36:21.910809 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:36:21.911016 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 18:36:21.911806 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:36:21.911959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 18:36:21.912768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 18:36:21.914376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 18:36:21.915402 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 18:36:21.929180 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 18:36:21.933151 kernel: ACPI: bus type drm_connector registered Mar 17 18:36:21.939862 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 18:36:21.942196 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 18:36:21.942700 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:36:21.942724 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 18:36:21.943947 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 18:36:21.953269 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 18:36:21.957213 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 18:36:21.957785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 18:36:21.966694 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 18:36:21.973287 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 18:36:21.973907 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:36:21.975386 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 18:36:21.977731 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 18:36:21.979277 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 18:36:21.982566 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 18:36:21.985283 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 18:36:21.994883 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:36:21.995083 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 18:36:22.015038 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 18:36:22.016340 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 18:36:22.017987 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 18:36:22.030301 systemd-journald[1130]: Time spent on flushing to /var/log/journal/4f0ad93fe1be482dbc6c1d1150898fdc is 54.642ms for 1120 entries. 
Mar 17 18:36:22.030301 systemd-journald[1130]: System Journal (/var/log/journal/4f0ad93fe1be482dbc6c1d1150898fdc) is 8.0M, max 584.8M, 576.8M free. Mar 17 18:36:22.117834 systemd-journald[1130]: Received client request to flush runtime journal. Mar 17 18:36:22.118322 kernel: loop0: detected capacity change from 0 to 138184 Mar 17 18:36:22.118352 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:36:22.118370 kernel: loop1: detected capacity change from 0 to 140992 Mar 17 18:36:22.066184 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 18:36:22.069520 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 18:36:22.078786 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 18:36:22.095557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:36:22.102222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 18:36:22.109544 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 18:36:22.121602 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Mar 17 18:36:22.121620 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Mar 17 18:36:22.125340 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 18:36:22.133427 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 18:36:22.145328 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 18:36:22.147589 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:36:22.148765 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 18:36:22.164891 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:36:22.179310 kernel: loop2: detected capacity change from 0 to 205544 Mar 17 18:36:22.200391 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 18:36:22.208675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 18:36:22.234638 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Mar 17 18:36:22.234655 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Mar 17 18:36:22.244156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 18:36:22.246242 kernel: loop3: detected capacity change from 0 to 8 Mar 17 18:36:22.268135 kernel: loop4: detected capacity change from 0 to 138184 Mar 17 18:36:22.287607 kernel: loop5: detected capacity change from 0 to 140992 Mar 17 18:36:22.311638 kernel: loop6: detected capacity change from 0 to 205544 Mar 17 18:36:22.341191 kernel: loop7: detected capacity change from 0 to 8 Mar 17 18:36:22.343285 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Mar 17 18:36:22.345042 (sd-merge)[1202]: Merged extensions into '/usr'. Mar 17 18:36:22.349451 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 18:36:22.349559 systemd[1]: Reloading... Mar 17 18:36:22.426185 zram_generator::config[1231]: No configuration found. Mar 17 18:36:22.548137 ldconfig[1168]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
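Annotation: (sd-merge) above lists the system extension images it found and merged into /usr; the loop0..loop7 capacity messages are those images being attached. A sketch that inspects the same thing by hand, listing candidate sysext images in the usual search directories (the directory list is an assumption from systemd-sysext's documented defaults, not taken from this log):

```python
import os
from pathlib import Path

# Usual systemd-sysext search directories (assumption; see systemd-sysext(8) for the full list).
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    base = Path(d)
    if not base.is_dir():
        continue
    for entry in sorted(base.iterdir()):
        if entry.is_symlink():
            # e.g. kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
            print(f"{entry} -> {os.readlink(entry)}")
        else:
            print(entry)
```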
Mar 17 18:36:22.554790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:36:22.595289 systemd[1]: Reloading finished in 245 ms. Mar 17 18:36:22.620429 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 18:36:22.623051 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 18:36:22.632286 systemd[1]: Starting ensure-sysext.service... Mar 17 18:36:22.635267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 18:36:22.639517 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 18:36:22.647262 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 18:36:22.651326 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)... Mar 17 18:36:22.651340 systemd[1]: Reloading... Mar 17 18:36:22.662323 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:36:22.662984 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 18:36:22.663908 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:36:22.664283 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Mar 17 18:36:22.664400 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Mar 17 18:36:22.667703 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 18:36:22.667779 systemd-tmpfiles[1272]: Skipping /boot Mar 17 18:36:22.681879 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 18:36:22.681977 systemd-tmpfiles[1272]: Skipping /boot Mar 17 18:36:22.688313 systemd-udevd[1274]: Using default interface naming scheme 'v255'. Mar 17 18:36:22.731139 zram_generator::config[1301]: No configuration found. Mar 17 18:36:22.875714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:36:22.882313 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 17 18:36:22.895122 kernel: ACPI: button: Power Button [PWRF] Mar 17 18:36:22.922133 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:36:22.939392 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 18:36:22.939743 systemd[1]: Reloading finished in 288 ms. Mar 17 18:36:22.961720 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 18:36:22.964166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 18:36:22.979148 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1328) Mar 17 18:36:22.987958 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Mar 17 18:36:22.997285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 17 18:36:23.005190 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 18:36:23.009833 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 18:36:23.010457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 18:36:23.019126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 18:36:23.021760 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 18:36:23.031347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 18:36:23.032149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 18:36:23.036180 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 18:36:23.041399 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 18:36:23.042460 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 17 18:36:23.054717 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 18:36:23.074182 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 18:36:23.074682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:36:23.077502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:36:23.077674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 18:36:23.078617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:36:23.078978 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 18:36:23.082803 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:36:23.084153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 18:36:23.093344 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Mar 17 18:36:23.093406 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Mar 17 18:36:23.097426 kernel: Console: switching to colour dummy device 80x25 Mar 17 18:36:23.098947 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 17 18:36:23.098993 kernel: [drm] features: -context_init Mar 17 18:36:23.101135 kernel: [drm] number of scanouts: 1 Mar 17 18:36:23.101188 kernel: [drm] number of cap sets: 0 Mar 17 18:36:23.102163 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Mar 17 18:36:23.103781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:36:23.103945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 18:36:23.113466 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Mar 17 18:36:23.113522 kernel: Console: switching to colour frame buffer device 160x50 Mar 17 18:36:23.113443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 17 18:36:23.127152 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 17 18:36:23.131337 kernel: EDAC MC: Ver: 3.0.0 Mar 17 18:36:23.145344 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 18:36:23.145625 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 18:36:23.145873 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 18:36:23.141286 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 18:36:23.162550 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 18:36:23.165783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 18:36:23.170339 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 18:36:23.170422 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:36:23.171453 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 18:36:23.174241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:36:23.174426 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 18:36:23.174923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:36:23.175071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 18:36:23.200239 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 17 18:36:23.204637 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:36:23.204804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 18:36:23.213525 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 18:36:23.214226 augenrules[1420]: No rules Mar 17 18:36:23.215723 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 18:36:23.215926 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 18:36:23.222884 systemd[1]: Finished ensure-sysext.service. Mar 17 18:36:23.229472 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:36:23.229724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 18:36:23.237308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 18:36:23.240497 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 18:36:23.243995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 18:36:23.247297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 18:36:23.247462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 18:36:23.250264 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 18:36:23.256301 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 18:36:23.258896 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 18:36:23.269355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 17 18:36:23.269786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:36:23.270072 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 18:36:23.273890 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 18:36:23.284080 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:36:23.284283 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 18:36:23.285187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:36:23.285343 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 18:36:23.291035 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:36:23.296460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:36:23.296649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 18:36:23.297536 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:36:23.297689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 18:36:23.301692 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:36:23.301875 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 18:36:23.311397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:36:23.311577 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 18:36:23.326250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 18:36:23.326941 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 18:36:23.342960 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 18:36:23.344263 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 18:36:23.347240 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 18:36:23.374755 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:36:23.392026 systemd-networkd[1384]: lo: Link UP Mar 17 18:36:23.392034 systemd-networkd[1384]: lo: Gained carrier Mar 17 18:36:23.399082 systemd-networkd[1384]: Enumeration completed Mar 17 18:36:23.399722 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 18:36:23.406247 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:36:23.406254 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:36:23.410274 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 18:36:23.412705 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 18:36:23.413142 systemd[1]: Reached target time-set.target - System Time Set. 
Mar 17 18:36:23.413753 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:36:23.413761 systemd-networkd[1384]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:36:23.414323 systemd-networkd[1384]: eth0: Link UP Mar 17 18:36:23.414327 systemd-networkd[1384]: eth0: Gained carrier Mar 17 18:36:23.414338 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:36:23.415645 systemd-resolved[1386]: Positive Trust Anchors: Mar 17 18:36:23.415654 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:36:23.415680 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 18:36:23.419505 systemd-networkd[1384]: eth1: Link UP Mar 17 18:36:23.419514 systemd-networkd[1384]: eth1: Gained carrier Mar 17 18:36:23.419526 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 18:36:23.421734 systemd-resolved[1386]: Using system hostname 'ci-4152-2-2-f-0ddc29a7f8'. Mar 17 18:36:23.423816 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 18:36:23.424631 systemd[1]: Reached target network.target - Network. Mar 17 18:36:23.425051 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 18:36:23.429442 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 18:36:23.432734 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 18:36:23.441296 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 18:36:23.443507 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 18:36:23.447389 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 18:36:23.452591 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:36:23.448016 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 18:36:23.449828 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 18:36:23.450276 systemd-networkd[1384]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:36:23.452266 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Mar 17 18:36:23.453579 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 18:36:23.456001 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 18:36:23.457820 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 17 18:36:23.460859 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:36:23.460892 systemd[1]: Reached target paths.target - Path Units. Mar 17 18:36:23.463368 systemd[1]: Reached target timers.target - Timer Units. Mar 17 18:36:23.468035 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 18:36:23.471925 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 18:36:23.480182 systemd-networkd[1384]: eth0: DHCPv4 address 37.27.32.129/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 17 18:36:23.480469 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Mar 17 18:36:23.481081 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 18:36:23.483137 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Mar 17 18:36:23.483675 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 18:36:23.485905 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 18:36:23.487363 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 18:36:23.488278 systemd[1]: Reached target basic.target - Basic System. Mar 17 18:36:23.488898 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 18:36:23.488986 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 18:36:23.496293 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 18:36:23.498938 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 18:36:23.505707 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 18:36:23.513079 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 18:36:23.519306 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 18:36:23.520919 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 18:36:23.527084 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 18:36:23.533166 jq[1474]: false Mar 17 18:36:23.534302 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Mar 17 18:36:23.540288 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 18:36:23.545375 coreos-metadata[1470]: Mar 17 18:36:23.545 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 17 18:36:23.559269 coreos-metadata[1470]: Mar 17 18:36:23.547 INFO Fetch successful Mar 17 18:36:23.559269 coreos-metadata[1470]: Mar 17 18:36:23.547 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 17 18:36:23.559269 coreos-metadata[1470]: Mar 17 18:36:23.547 INFO Fetch successful Mar 17 18:36:23.547797 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 18:36:23.562206 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 18:36:23.565948 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:36:23.566423 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Mar 17 18:36:23.572391 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 18:36:23.574483 extend-filesystems[1475]: Found loop4 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found loop5 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found loop6 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found loop7 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found sda Mar 17 18:36:23.576378 extend-filesystems[1475]: Found sda1 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found sda2 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found sda3 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found usr Mar 17 18:36:23.576378 extend-filesystems[1475]: Found sda4 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found sda6 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found sda7 Mar 17 18:36:23.576378 extend-filesystems[1475]: Found sda9 Mar 17 18:36:23.576378 extend-filesystems[1475]: Checking size of /dev/sda9 Mar 17 18:36:23.582309 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 18:36:23.605528 dbus-daemon[1471]: [system] SELinux support is enabled Mar 17 18:36:23.601488 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:36:23.632876 update_engine[1485]: I20250317 18:36:23.626739 1485 main.cc:92] Flatcar Update Engine starting Mar 17 18:36:23.601683 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 18:36:23.633214 jq[1489]: true Mar 17 18:36:23.601988 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:36:23.602260 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 18:36:23.606702 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 18:36:23.629375 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:36:23.629415 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 18:36:23.629875 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:36:23.629891 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 18:36:23.634613 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:36:23.634808 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 18:36:23.648959 systemd[1]: Started update-engine.service - Update Engine. Mar 17 18:36:23.654689 update_engine[1485]: I20250317 18:36:23.654395 1485 update_check_scheduler.cc:74] Next update check in 3m17s Mar 17 18:36:23.660326 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 18:36:23.661439 jq[1496]: true Mar 17 18:36:23.668208 extend-filesystems[1475]: Resized partition /dev/sda9 Mar 17 18:36:23.678023 extend-filesystems[1511]: resize2fs 1.47.1 (20-May-2024) Mar 17 18:36:23.677508 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 18:36:23.677931 systemd-logind[1482]: New seat seat0. 
Mar 17 18:36:23.694122 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 17 18:36:23.706024 systemd-logind[1482]: Watching system buttons on /dev/input/event2 (Power Button) Mar 17 18:36:23.706172 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 18:36:23.707187 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 18:36:23.720159 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1315) Mar 17 18:36:23.792916 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 18:36:23.799633 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 18:36:23.828999 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 17 18:36:23.847693 extend-filesystems[1511]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 17 18:36:23.847693 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 17 18:36:23.847693 extend-filesystems[1511]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 17 18:36:23.852100 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:36:23.852904 extend-filesystems[1475]: Resized filesystem in /dev/sda9 Mar 17 18:36:23.852904 extend-filesystems[1475]: Found sr0 Mar 17 18:36:23.853885 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:36:23.854165 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 18:36:23.867717 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:36:23.871545 bash[1539]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:36:23.874249 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 18:36:23.886377 systemd[1]: Starting sshkeys.service... Mar 17 18:36:23.898014 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 18:36:23.909387 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 18:36:23.910249 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 18:36:23.921133 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 18:36:23.928722 coreos-metadata[1557]: Mar 17 18:36:23.928 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 17 18:36:23.929791 coreos-metadata[1557]: Mar 17 18:36:23.929 INFO Fetch successful Mar 17 18:36:23.931766 unknown[1557]: wrote ssh authorized keys file for user: core Mar 17 18:36:23.946451 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:36:23.946946 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 18:36:23.958439 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 18:36:23.963126 containerd[1503]: time="2025-03-17T18:36:23.963034090Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 18:36:23.967123 update-ssh-keys[1566]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:36:23.970386 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 18:36:23.973043 systemd[1]: Finished sshkeys.service. Mar 17 18:36:23.977958 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Mar 17 18:36:23.986457 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 18:36:23.988781 containerd[1503]: time="2025-03-17T18:36:23.988618523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:36:23.989409 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.990533083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.990558782Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.990573389Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.990728861Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.990743328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.990808129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.990818879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.990983839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.990996352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.991007723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991119 containerd[1503]: time="2025-03-17T18:36:23.991015969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991333 containerd[1503]: time="2025-03-17T18:36:23.991316332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991621 containerd[1503]: time="2025-03-17T18:36:23.991603781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991785 containerd[1503]: time="2025-03-17T18:36:23.991768991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:36:23.991833 containerd[1503]: time="2025-03-17T18:36:23.991822251Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:36:23.991969 containerd[1503]: time="2025-03-17T18:36:23.991953557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:36:23.992064 containerd[1503]: time="2025-03-17T18:36:23.992050379Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:36:23.993471 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 18:36:24.002176 containerd[1503]: time="2025-03-17T18:36:24.002154745Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:36:24.002266 containerd[1503]: time="2025-03-17T18:36:24.002252408Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:36:24.002322 containerd[1503]: time="2025-03-17T18:36:24.002309435Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 18:36:24.002373 containerd[1503]: time="2025-03-17T18:36:24.002361243Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 18:36:24.002435 containerd[1503]: time="2025-03-17T18:36:24.002422668Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:36:24.002629 containerd[1503]: time="2025-03-17T18:36:24.002604659Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:36:24.002951 containerd[1503]: time="2025-03-17T18:36:24.002907507Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:36:24.003137 containerd[1503]: time="2025-03-17T18:36:24.003095209Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 18:36:24.003162 containerd[1503]: time="2025-03-17T18:36:24.003140393Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 18:36:24.003162 containerd[1503]: time="2025-03-17T18:36:24.003157215Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 18:36:24.003205 containerd[1503]: time="2025-03-17T18:36:24.003170450Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:36:24.003205 containerd[1503]: time="2025-03-17T18:36:24.003183474Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:36:24.003205 containerd[1503]: time="2025-03-17T18:36:24.003193693Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:36:24.003252 containerd[1503]: time="2025-03-17T18:36:24.003205536Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:36:24.003252 containerd[1503]: time="2025-03-17T18:36:24.003219191Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 17 18:36:24.003252 containerd[1503]: time="2025-03-17T18:36:24.003231274Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:36:24.003252 containerd[1503]: time="2025-03-17T18:36:24.003241453Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:36:24.003252 containerd[1503]: time="2025-03-17T18:36:24.003251401Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:36:24.003335 containerd[1503]: time="2025-03-17T18:36:24.003270928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003335 containerd[1503]: time="2025-03-17T18:36:24.003283091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003335 containerd[1503]: time="2025-03-17T18:36:24.003293430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003335 containerd[1503]: time="2025-03-17T18:36:24.003304521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003335 containerd[1503]: time="2025-03-17T18:36:24.003314209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003335 containerd[1503]: time="2025-03-17T18:36:24.003325621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003440 containerd[1503]: time="2025-03-17T18:36:24.003341741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003440 containerd[1503]: time="2025-03-17T18:36:24.003353343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003440 containerd[1503]: time="2025-03-17T18:36:24.003364333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003440 containerd[1503]: time="2025-03-17T18:36:24.003377318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003440 containerd[1503]: time="2025-03-17T18:36:24.003388338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003440 containerd[1503]: time="2025-03-17T18:36:24.003397836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003440 containerd[1503]: time="2025-03-17T18:36:24.003408746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003440 containerd[1503]: time="2025-03-17T18:36:24.003420398Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 18:36:24.003440 containerd[1503]: time="2025-03-17T18:36:24.003437230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003575 containerd[1503]: time="2025-03-17T18:36:24.003448601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Mar 17 18:36:24.003575 containerd[1503]: time="2025-03-17T18:36:24.003458340Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:36:24.003575 containerd[1503]: time="2025-03-17T18:36:24.003513634Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:36:24.003575 containerd[1503]: time="2025-03-17T18:36:24.003531617Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 18:36:24.003575 containerd[1503]: time="2025-03-17T18:36:24.003540113Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:36:24.003575 containerd[1503]: time="2025-03-17T18:36:24.003549891Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 18:36:24.003575 containerd[1503]: time="2025-03-17T18:36:24.003557516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003575 containerd[1503]: time="2025-03-17T18:36:24.003568827Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 18:36:24.003575 containerd[1503]: time="2025-03-17T18:36:24.003577283Z" level=info msg="NRI interface is disabled by configuration." Mar 17 18:36:24.003715 containerd[1503]: time="2025-03-17T18:36:24.003586660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:36:24.003863 containerd[1503]: time="2025-03-17T18:36:24.003812584Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:36:24.003863 containerd[1503]: time="2025-03-17T18:36:24.003857198Z" level=info msg="Connect containerd service" Mar 17 18:36:24.004001 containerd[1503]: time="2025-03-17T18:36:24.003895940Z" level=info msg="using legacy CRI server" Mar 17 18:36:24.004001 containerd[1503]: time="2025-03-17T18:36:24.003903033Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 18:36:24.004032 containerd[1503]: time="2025-03-17T18:36:24.004021165Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:36:24.004672 containerd[1503]: time="2025-03-17T18:36:24.004633934Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:36:24.004945 containerd[1503]: time="2025-03-17T18:36:24.004917135Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:36:24.004996 containerd[1503]: time="2025-03-17T18:36:24.004971607Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:36:24.005044 containerd[1503]: time="2025-03-17T18:36:24.005013586Z" level=info msg="Start subscribing containerd event" Mar 17 18:36:24.005065 containerd[1503]: time="2025-03-17T18:36:24.005045115Z" level=info msg="Start recovering state" Mar 17 18:36:24.005145 containerd[1503]: time="2025-03-17T18:36:24.005121468Z" level=info msg="Start event monitor" Mar 17 18:36:24.005145 containerd[1503]: time="2025-03-17T18:36:24.005143289Z" level=info msg="Start snapshots syncer" Mar 17 18:36:24.005182 containerd[1503]: time="2025-03-17T18:36:24.005152076Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:36:24.005182 containerd[1503]: time="2025-03-17T18:36:24.005159009Z" level=info msg="Start streaming server" Mar 17 18:36:24.005290 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 18:36:24.008431 containerd[1503]: time="2025-03-17T18:36:24.008305589Z" level=info msg="containerd successfully booted in 0.046021s" Mar 17 18:36:24.468382 systemd-networkd[1384]: eth0: Gained IPv6LL Mar 17 18:36:24.469096 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Mar 17 18:36:24.471335 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 18:36:24.473623 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 18:36:24.487321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 17 18:36:24.490381 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 18:36:24.527754 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 18:36:24.852548 systemd-networkd[1384]: eth1: Gained IPv6LL Mar 17 18:36:24.853070 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Mar 17 18:36:25.275050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:36:25.276199 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 18:36:25.279334 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:36:25.279398 systemd[1]: Startup finished in 1.221s (kernel) + 5.407s (initrd) + 4.220s (userspace) = 10.849s. Mar 17 18:36:25.808709 kubelet[1594]: E0317 18:36:25.808649 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:36:25.812400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:36:25.812580 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:36:36.063329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:36:36.070403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:36:36.237174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:36:36.241050 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:36:36.283914 kubelet[1613]: E0317 18:36:36.283858 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:36:36.292828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:36:36.293026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:36:46.543532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:36:46.550504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:36:46.680840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:36:46.685648 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:36:46.724078 kubelet[1629]: E0317 18:36:46.723995 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:36:46.727277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:36:46.727513 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 18:36:56.514667 systemd-timesyncd[1439]: Contacted time server 144.76.66.157:123 (2.flatcar.pool.ntp.org). Mar 17 18:36:56.514764 systemd-timesyncd[1439]: Initial clock synchronization to Mon 2025-03-17 18:36:56.514444 UTC. Mar 17 18:36:56.515304 systemd-resolved[1386]: Clock change detected. Flushing caches. Mar 17 18:36:58.236135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 18:36:58.248934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:36:58.372532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:36:58.379073 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:36:58.411719 kubelet[1644]: E0317 18:36:58.411602 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:36:58.414858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:36:58.415096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:37:08.486175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 18:37:08.498010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:37:08.618765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:37:08.622680 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:37:08.658491 kubelet[1659]: E0317 18:37:08.658428 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:37:08.662057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:37:08.662242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:37:09.722043 update_engine[1485]: I20250317 18:37:09.721860 1485 update_attempter.cc:509] Updating boot flags... Mar 17 18:37:09.778774 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1675) Mar 17 18:37:09.831815 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1671) Mar 17 18:37:18.736041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 18:37:18.741125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:37:18.864825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 18:37:18.875965 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:37:18.909799 kubelet[1692]: E0317 18:37:18.909754 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:37:18.913496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:37:18.913685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:37:28.986378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 18:37:28.992991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:37:29.162235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:37:29.166037 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:37:29.195534 kubelet[1707]: E0317 18:37:29.195447 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:37:29.197903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:37:29.198300 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:37:39.236629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 17 18:37:39.244051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:37:39.439332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:37:39.443520 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:37:39.475363 kubelet[1722]: E0317 18:37:39.475295 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:37:39.479358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:37:39.479552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:37:49.486111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 17 18:37:49.490870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:37:49.637915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 18:37:49.641830 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:37:49.672948 kubelet[1738]: E0317 18:37:49.672838 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:37:49.676244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:37:49.676435 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:37:59.736120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Mar 17 18:37:59.743053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:37:59.883978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:37:59.884056 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:37:59.926625 kubelet[1754]: E0317 18:37:59.926541 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:37:59.931995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:37:59.932202 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:38:09.986122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Mar 17 18:38:09.994866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:38:10.132087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:38:10.135903 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:38:10.168347 kubelet[1770]: E0317 18:38:10.168298 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:38:10.171658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:38:10.171894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:38:20.236188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Mar 17 18:38:20.248919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:38:20.382091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 18:38:20.385900 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:38:20.422647 kubelet[1784]: E0317 18:38:20.422595 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:38:20.425253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:38:20.425492 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:38:22.302434 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 18:38:22.308021 systemd[1]: Started sshd@0-37.27.32.129:22-139.178.68.195:43628.service - OpenSSH per-connection server daemon (139.178.68.195:43628). Mar 17 18:38:23.292095 sshd[1793]: Accepted publickey for core from 139.178.68.195 port 43628 ssh2: RSA SHA256:KT8G9RK2px9p9YgeH/3tYJpswueF9x/qD/yUNmSl7bQ Mar 17 18:38:23.294941 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:38:23.305056 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 18:38:23.311999 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 18:38:23.314316 systemd-logind[1482]: New session 1 of user core. Mar 17 18:38:23.325948 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 18:38:23.338061 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 18:38:23.342399 (systemd)[1797]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:38:23.453761 systemd[1797]: Queued start job for default target default.target. Mar 17 18:38:23.460895 systemd[1797]: Created slice app.slice - User Application Slice. Mar 17 18:38:23.460922 systemd[1797]: Reached target paths.target - Paths. Mar 17 18:38:23.460935 systemd[1797]: Reached target timers.target - Timers. Mar 17 18:38:23.462362 systemd[1797]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 18:38:23.475224 systemd[1797]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 18:38:23.475460 systemd[1797]: Reached target sockets.target - Sockets. Mar 17 18:38:23.475481 systemd[1797]: Reached target basic.target - Basic System. Mar 17 18:38:23.475517 systemd[1797]: Reached target default.target - Main User Target. Mar 17 18:38:23.475549 systemd[1797]: Startup finished in 124ms. Mar 17 18:38:23.475842 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 18:38:23.477467 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 18:38:24.163831 systemd[1]: Started sshd@1-37.27.32.129:22-139.178.68.195:43630.service - OpenSSH per-connection server daemon (139.178.68.195:43630). Mar 17 18:38:25.140767 sshd[1808]: Accepted publickey for core from 139.178.68.195 port 43630 ssh2: RSA SHA256:KT8G9RK2px9p9YgeH/3tYJpswueF9x/qD/yUNmSl7bQ Mar 17 18:38:25.142341 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:38:25.147470 systemd-logind[1482]: New session 2 of user core. Mar 17 18:38:25.158931 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 18:38:25.818092 sshd[1810]: Connection closed by 139.178.68.195 port 43630 Mar 17 18:38:25.819018 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Mar 17 18:38:25.822240 systemd[1]: sshd@1-37.27.32.129:22-139.178.68.195:43630.service: Deactivated successfully. Mar 17 18:38:25.824243 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:38:25.825694 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:38:25.827064 systemd-logind[1482]: Removed session 2. Mar 17 18:38:26.016819 systemd[1]: Started sshd@2-37.27.32.129:22-139.178.68.195:33020.service - OpenSSH per-connection server daemon (139.178.68.195:33020). Mar 17 18:38:27.086552 sshd[1815]: Accepted publickey for core from 139.178.68.195 port 33020 ssh2: RSA SHA256:KT8G9RK2px9p9YgeH/3tYJpswueF9x/qD/yUNmSl7bQ Mar 17 18:38:27.088033 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:38:27.092447 systemd-logind[1482]: New session 3 of user core. Mar 17 18:38:27.098892 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 18:38:27.821257 sshd[1817]: Connection closed by 139.178.68.195 port 33020 Mar 17 18:38:27.821996 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Mar 17 18:38:27.826048 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:38:27.826787 systemd[1]: sshd@2-37.27.32.129:22-139.178.68.195:33020.service: Deactivated successfully. Mar 17 18:38:27.829103 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:38:27.830053 systemd-logind[1482]: Removed session 3. Mar 17 18:38:27.975949 systemd[1]: Started sshd@3-37.27.32.129:22-139.178.68.195:33024.service - OpenSSH per-connection server daemon (139.178.68.195:33024). Mar 17 18:38:28.948669 sshd[1822]: Accepted publickey for core from 139.178.68.195 port 33024 ssh2: RSA SHA256:KT8G9RK2px9p9YgeH/3tYJpswueF9x/qD/yUNmSl7bQ Mar 17 18:38:28.950780 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:38:28.956897 systemd-logind[1482]: New session 4 of user core. Mar 17 18:38:28.969957 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 18:38:29.622519 sshd[1824]: Connection closed by 139.178.68.195 port 33024 Mar 17 18:38:29.623155 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Mar 17 18:38:29.626878 systemd[1]: sshd@3-37.27.32.129:22-139.178.68.195:33024.service: Deactivated successfully. Mar 17 18:38:29.628854 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:38:29.629505 systemd-logind[1482]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:38:29.630616 systemd-logind[1482]: Removed session 4. Mar 17 18:38:29.798239 systemd[1]: Started sshd@4-37.27.32.129:22-139.178.68.195:33034.service - OpenSSH per-connection server daemon (139.178.68.195:33034). Mar 17 18:38:30.485935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Mar 17 18:38:30.491080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:38:30.611593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 18:38:30.615352 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:38:30.645528 kubelet[1839]: E0317 18:38:30.645461 1839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:38:30.649067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:38:30.649248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:38:30.793345 sshd[1829]: Accepted publickey for core from 139.178.68.195 port 33034 ssh2: RSA SHA256:KT8G9RK2px9p9YgeH/3tYJpswueF9x/qD/yUNmSl7bQ Mar 17 18:38:30.794637 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:38:30.799449 systemd-logind[1482]: New session 5 of user core. Mar 17 18:38:30.806845 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 18:38:31.321386 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 18:38:31.321868 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:38:31.338996 sudo[1847]: pam_unix(sudo:session): session closed for user root Mar 17 18:38:31.496894 sshd[1846]: Connection closed by 139.178.68.195 port 33034 Mar 17 18:38:31.497827 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Mar 17 18:38:31.501097 systemd[1]: sshd@4-37.27.32.129:22-139.178.68.195:33034.service: Deactivated successfully. Mar 17 18:38:31.503244 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:38:31.504579 systemd-logind[1482]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:38:31.505887 systemd-logind[1482]: Removed session 5. Mar 17 18:38:31.664889 systemd[1]: Started sshd@5-37.27.32.129:22-139.178.68.195:33038.service - OpenSSH per-connection server daemon (139.178.68.195:33038). Mar 17 18:38:32.641407 sshd[1852]: Accepted publickey for core from 139.178.68.195 port 33038 ssh2: RSA SHA256:KT8G9RK2px9p9YgeH/3tYJpswueF9x/qD/yUNmSl7bQ Mar 17 18:38:32.643054 sshd-session[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:38:32.647855 systemd-logind[1482]: New session 6 of user core. Mar 17 18:38:32.653831 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 18:38:33.160612 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 18:38:33.161058 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:38:33.165073 sudo[1856]: pam_unix(sudo:session): session closed for user root Mar 17 18:38:33.171677 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 18:38:33.172080 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:38:33.190066 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 18:38:33.220189 augenrules[1878]: No rules Mar 17 18:38:33.221055 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 18:38:33.221397 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Mar 17 18:38:33.222648 sudo[1855]: pam_unix(sudo:session): session closed for user root Mar 17 18:38:33.380133 sshd[1854]: Connection closed by 139.178.68.195 port 33038 Mar 17 18:38:33.380768 sshd-session[1852]: pam_unix(sshd:session): session closed for user core Mar 17 18:38:33.384602 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:38:33.385314 systemd[1]: sshd@5-37.27.32.129:22-139.178.68.195:33038.service: Deactivated successfully. Mar 17 18:38:33.387397 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:38:33.388268 systemd-logind[1482]: Removed session 6. Mar 17 18:38:33.552123 systemd[1]: Started sshd@6-37.27.32.129:22-139.178.68.195:33040.service - OpenSSH per-connection server daemon (139.178.68.195:33040). Mar 17 18:38:34.538637 sshd[1886]: Accepted publickey for core from 139.178.68.195 port 33040 ssh2: RSA SHA256:KT8G9RK2px9p9YgeH/3tYJpswueF9x/qD/yUNmSl7bQ Mar 17 18:38:34.540366 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:38:34.545084 systemd-logind[1482]: New session 7 of user core. Mar 17 18:38:34.551904 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 18:38:35.060073 sudo[1889]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:38:35.060460 sudo[1889]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:38:35.566356 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:38:35.574925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:38:35.598763 systemd[1]: Reloading requested from client PID 1920 ('systemctl') (unit session-7.scope)... Mar 17 18:38:35.598885 systemd[1]: Reloading... Mar 17 18:38:35.716732 zram_generator::config[1974]: No configuration found. Mar 17 18:38:35.789654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:38:35.865571 systemd[1]: Reloading finished in 266 ms. Mar 17 18:38:35.922004 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 18:38:35.922306 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 18:38:35.922911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:38:35.929094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:38:36.044431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:38:36.048399 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 18:38:36.081376 kubelet[2014]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:38:36.081376 kubelet[2014]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:38:36.081376 kubelet[2014]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:38:36.081805 kubelet[2014]: I0317 18:38:36.081442 2014 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:38:36.373326 kubelet[2014]: I0317 18:38:36.373279 2014 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:38:36.373326 kubelet[2014]: I0317 18:38:36.373311 2014 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:38:36.373587 kubelet[2014]: I0317 18:38:36.373562 2014 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:38:36.395055 kubelet[2014]: I0317 18:38:36.394930 2014 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:38:36.401541 kubelet[2014]: E0317 18:38:36.401460 2014 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:38:36.401541 kubelet[2014]: I0317 18:38:36.401486 2014 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:38:36.406044 kubelet[2014]: I0317 18:38:36.406012 2014 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:38:36.406147 kubelet[2014]: I0317 18:38:36.406120 2014 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:38:36.406524 kubelet[2014]: I0317 18:38:36.406242 2014 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:38:36.406524 kubelet[2014]: I0317 18:38:36.406275 2014 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:38:36.406524 kubelet[2014]: I0317 18:38:36.406519 2014 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 17 18:38:36.406524 kubelet[2014]: I0317 18:38:36.406527 2014 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:38:36.406664 kubelet[2014]: I0317 18:38:36.406625 2014 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:38:36.408750 kubelet[2014]: I0317 18:38:36.408693 2014 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:38:36.408750 kubelet[2014]: I0317 18:38:36.408727 2014 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:38:36.408819 kubelet[2014]: I0317 18:38:36.408758 2014 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:38:36.408819 kubelet[2014]: I0317 18:38:36.408772 2014 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:38:36.414176 kubelet[2014]: E0317 18:38:36.413728 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:36.414176 kubelet[2014]: E0317 18:38:36.413996 2014 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:36.414505 kubelet[2014]: I0317 18:38:36.414404 2014 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 18:38:36.416016 kubelet[2014]: I0317 18:38:36.415912 2014 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:38:36.416016 kubelet[2014]: W0317 18:38:36.415976 2014 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:38:36.416614 kubelet[2014]: I0317 18:38:36.416478 2014 server.go:1269] "Started kubelet" Mar 17 18:38:36.416657 kubelet[2014]: I0317 18:38:36.416571 2014 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:38:36.417432 kubelet[2014]: I0317 18:38:36.417409 2014 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:38:36.419227 kubelet[2014]: I0317 18:38:36.419202 2014 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:38:36.422269 kubelet[2014]: I0317 18:38:36.420830 2014 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:38:36.422269 kubelet[2014]: I0317 18:38:36.421015 2014 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:38:36.422269 kubelet[2014]: I0317 18:38:36.422161 2014 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:38:36.426264 kubelet[2014]: E0317 18:38:36.426248 2014 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:38:36.426528 kubelet[2014]: E0317 18:38:36.426514 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:36.426623 kubelet[2014]: I0317 18:38:36.426611 2014 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:38:36.426864 kubelet[2014]: I0317 18:38:36.426849 2014 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:38:36.426956 kubelet[2014]: I0317 18:38:36.426946 2014 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:38:36.427444 kubelet[2014]: I0317 18:38:36.427429 2014 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:38:36.427591 kubelet[2014]: I0317 18:38:36.427574 2014 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:38:36.429728 kubelet[2014]: I0317 18:38:36.429055 2014 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:38:36.440952 kubelet[2014]: I0317 18:38:36.440847 2014 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:38:36.441125 kubelet[2014]: I0317 18:38:36.441054 2014 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:38:36.441190 kubelet[2014]: I0317 18:38:36.441181 2014 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:38:36.443302 kubelet[2014]: I0317 18:38:36.443271 2014 policy_none.go:49] "None policy: Start" Mar 17 18:38:36.443770 kubelet[2014]: E0317 18:38:36.443592 2014 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Mar 17 18:38:36.443889 kubelet[2014]: W0317 18:38:36.443875 2014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 18:38:36.444065 kubelet[2014]: E0317 18:38:36.444046 2014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 17 18:38:36.444533 kubelet[2014]: I0317 18:38:36.444520 2014 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:38:36.444623 kubelet[2014]: I0317 18:38:36.444613 2014 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:38:36.446829 kubelet[2014]: W0317 18:38:36.446811 2014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 18:38:36.446917 kubelet[2014]: E0317 18:38:36.446902 2014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" 
logger="UnhandledError" Mar 17 18:38:36.447108 kubelet[2014]: W0317 18:38:36.447093 2014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 18:38:36.447181 kubelet[2014]: E0317 18:38:36.447167 2014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 17 18:38:36.447491 kubelet[2014]: E0317 18:38:36.443621 2014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.182dab1014a56af2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-03-17 18:38:36.416461554 +0000 UTC m=+0.363878227,LastTimestamp:2025-03-17 18:38:36.416461554 +0000 UTC m=+0.363878227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Mar 17 18:38:36.456940 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 18:38:36.458355 kubelet[2014]: E0317 18:38:36.457951 2014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.182dab10153a99ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-03-17 18:38:36.42623838 +0000 UTC m=+0.373655044,LastTimestamp:2025-03-17 18:38:36.42623838 +0000 UTC m=+0.373655044,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Mar 17 18:38:36.464805 kubelet[2014]: E0317 18:38:36.464635 2014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.182dab10160651ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.4 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-03-17 18:38:36.439589326 +0000 UTC m=+0.387005989,LastTimestamp:2025-03-17 18:38:36.439589326 +0000 UTC m=+0.387005989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Mar 17 18:38:36.471248 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container 
kubepods-burstable.slice. Mar 17 18:38:36.475304 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 18:38:36.483429 kubelet[2014]: I0317 18:38:36.482686 2014 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:38:36.483429 kubelet[2014]: I0317 18:38:36.482885 2014 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:38:36.483429 kubelet[2014]: I0317 18:38:36.482895 2014 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:38:36.483429 kubelet[2014]: I0317 18:38:36.483315 2014 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:38:36.486696 kubelet[2014]: E0317 18:38:36.486675 2014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" Mar 17 18:38:36.487444 kubelet[2014]: I0317 18:38:36.487372 2014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:38:36.489050 kubelet[2014]: I0317 18:38:36.489027 2014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:38:36.489098 kubelet[2014]: I0317 18:38:36.489077 2014 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:38:36.489098 kubelet[2014]: I0317 18:38:36.489093 2014 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:38:36.489213 kubelet[2014]: E0317 18:38:36.489193 2014 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 18:38:36.584529 kubelet[2014]: I0317 18:38:36.584492 2014 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.4" Mar 17 18:38:36.590333 kubelet[2014]: I0317 18:38:36.590303 2014 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.4" Mar 17 18:38:36.590333 kubelet[2014]: E0317 18:38:36.590329 2014 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": node \"10.0.0.4\" not found" Mar 17 18:38:36.619009 kubelet[2014]: E0317 18:38:36.618959 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:36.719991 kubelet[2014]: E0317 18:38:36.719856 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:36.820511 kubelet[2014]: E0317 18:38:36.820434 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:36.921618 kubelet[2014]: E0317 18:38:36.921561 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:37.022170 kubelet[2014]: E0317 18:38:37.022115 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:37.070632 sudo[1889]: pam_unix(sudo:session): session closed for user root Mar 17 18:38:37.122861 kubelet[2014]: E0317 18:38:37.122808 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:37.223434 kubelet[2014]: E0317 18:38:37.223384 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:37.228696 sshd[1888]: Connection closed by 139.178.68.195 port 33040 Mar 17 18:38:37.229261 
sshd-session[1886]: pam_unix(sshd:session): session closed for user core Mar 17 18:38:37.232978 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:38:37.233902 systemd[1]: sshd@6-37.27.32.129:22-139.178.68.195:33040.service: Deactivated successfully. Mar 17 18:38:37.235938 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:38:37.237020 systemd-logind[1482]: Removed session 7. Mar 17 18:38:37.324216 kubelet[2014]: E0317 18:38:37.324077 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:37.375759 kubelet[2014]: I0317 18:38:37.375700 2014 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 18:38:37.375934 kubelet[2014]: W0317 18:38:37.375898 2014 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 18:38:37.414323 kubelet[2014]: E0317 18:38:37.414291 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:37.424667 kubelet[2014]: E0317 18:38:37.424633 2014 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Mar 17 18:38:37.526198 kubelet[2014]: I0317 18:38:37.526169 2014 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 18:38:37.526444 containerd[1503]: time="2025-03-17T18:38:37.526401844Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:38:37.527005 kubelet[2014]: I0317 18:38:37.526985 2014 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 18:38:38.413014 kubelet[2014]: I0317 18:38:38.412961 2014 apiserver.go:52] "Watching apiserver" Mar 17 18:38:38.414537 kubelet[2014]: E0317 18:38:38.414492 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:38.417335 kubelet[2014]: E0317 18:38:38.416875 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:38.426898 systemd[1]: Created slice kubepods-besteffort-pod5b07f99a_e2e6_4cbc_828f_9c3ee248f2a5.slice - libcontainer container kubepods-besteffort-pod5b07f99a_e2e6_4cbc_828f_9c3ee248f2a5.slice. 
Mar 17 18:38:38.427565 kubelet[2014]: I0317 18:38:38.427426 2014 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:38:38.437645 kubelet[2014]: I0317 18:38:38.436791 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e605fe16-99a5-491e-bb58-ea92f025ef2b-registration-dir\") pod \"csi-node-driver-hgspb\" (UID: \"e605fe16-99a5-491e-bb58-ea92f025ef2b\") " pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:38.437645 kubelet[2014]: I0317 18:38:38.436835 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5-kube-proxy\") pod \"kube-proxy-tvjs6\" (UID: \"5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5\") " pod="kube-system/kube-proxy-tvjs6" Mar 17 18:38:38.437645 kubelet[2014]: I0317 18:38:38.436852 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-flexvol-driver-host\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.437645 kubelet[2014]: I0317 18:38:38.436867 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4dn7\" (UniqueName: \"kubernetes.io/projected/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-kube-api-access-p4dn7\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.437645 kubelet[2014]: I0317 18:38:38.436881 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e605fe16-99a5-491e-bb58-ea92f025ef2b-varrun\") pod \"csi-node-driver-hgspb\" (UID: \"e605fe16-99a5-491e-bb58-ea92f025ef2b\") " pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:38.436935 systemd[1]: Created slice kubepods-besteffort-pod9830dfb6_6956_4ec8_b00b_eb4f0aa4833e.slice - libcontainer container kubepods-besteffort-pod9830dfb6_6956_4ec8_b00b_eb4f0aa4833e.slice. 
Mar 17 18:38:38.437875 kubelet[2014]: I0317 18:38:38.436903 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e605fe16-99a5-491e-bb58-ea92f025ef2b-kubelet-dir\") pod \"csi-node-driver-hgspb\" (UID: \"e605fe16-99a5-491e-bb58-ea92f025ef2b\") " pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:38.437875 kubelet[2014]: I0317 18:38:38.436918 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-xtables-lock\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.437875 kubelet[2014]: I0317 18:38:38.436933 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-cni-bin-dir\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.437875 kubelet[2014]: I0317 18:38:38.436947 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-cni-net-dir\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.437875 kubelet[2014]: I0317 18:38:38.436968 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-var-lib-calico\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.437992 kubelet[2014]: I0317 18:38:38.436983 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drg46\" (UniqueName: \"kubernetes.io/projected/e605fe16-99a5-491e-bb58-ea92f025ef2b-kube-api-access-drg46\") pod \"csi-node-driver-hgspb\" (UID: \"e605fe16-99a5-491e-bb58-ea92f025ef2b\") " pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:38.437992 kubelet[2014]: I0317 18:38:38.437003 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5-lib-modules\") pod \"kube-proxy-tvjs6\" (UID: \"5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5\") " pod="kube-system/kube-proxy-tvjs6" Mar 17 18:38:38.437992 kubelet[2014]: I0317 18:38:38.437017 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w58f\" (UniqueName: \"kubernetes.io/projected/5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5-kube-api-access-5w58f\") pod \"kube-proxy-tvjs6\" (UID: \"5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5\") " pod="kube-system/kube-proxy-tvjs6" Mar 17 18:38:38.437992 kubelet[2014]: I0317 18:38:38.437032 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-tigera-ca-bundle\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.437992 kubelet[2014]: I0317 18:38:38.437045 
2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-node-certs\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.438118 kubelet[2014]: I0317 18:38:38.437067 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-var-run-calico\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.438118 kubelet[2014]: I0317 18:38:38.437085 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e605fe16-99a5-491e-bb58-ea92f025ef2b-socket-dir\") pod \"csi-node-driver-hgspb\" (UID: \"e605fe16-99a5-491e-bb58-ea92f025ef2b\") " pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:38.438118 kubelet[2014]: I0317 18:38:38.437100 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5-xtables-lock\") pod \"kube-proxy-tvjs6\" (UID: \"5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5\") " pod="kube-system/kube-proxy-tvjs6" Mar 17 18:38:38.438118 kubelet[2014]: I0317 18:38:38.437117 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-lib-modules\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.438118 kubelet[2014]: I0317 18:38:38.437131 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-policysync\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.438235 kubelet[2014]: I0317 18:38:38.437144 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9830dfb6-6956-4ec8-b00b-eb4f0aa4833e-cni-log-dir\") pod \"calico-node-69pgb\" (UID: \"9830dfb6-6956-4ec8-b00b-eb4f0aa4833e\") " pod="calico-system/calico-node-69pgb" Mar 17 18:38:38.540478 kubelet[2014]: E0317 18:38:38.540403 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.540478 kubelet[2014]: W0317 18:38:38.540469 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.540683 kubelet[2014]: E0317 18:38:38.540493 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:38:38.541727 kubelet[2014]: E0317 18:38:38.540824 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.541727 kubelet[2014]: W0317 18:38:38.540840 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.541727 kubelet[2014]: E0317 18:38:38.540854 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.541727 kubelet[2014]: E0317 18:38:38.541317 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.541727 kubelet[2014]: W0317 18:38:38.541329 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.541727 kubelet[2014]: E0317 18:38:38.541380 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.542011 kubelet[2014]: E0317 18:38:38.541831 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.542011 kubelet[2014]: W0317 18:38:38.541844 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.542011 kubelet[2014]: E0317 18:38:38.541903 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.542297 kubelet[2014]: E0317 18:38:38.542268 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.542368 kubelet[2014]: W0317 18:38:38.542316 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.542368 kubelet[2014]: E0317 18:38:38.542345 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.542732 kubelet[2014]: E0317 18:38:38.542692 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.542801 kubelet[2014]: W0317 18:38:38.542747 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.542846 kubelet[2014]: E0317 18:38:38.542832 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:38:38.543216 kubelet[2014]: E0317 18:38:38.543191 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.543216 kubelet[2014]: W0317 18:38:38.543210 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.543356 kubelet[2014]: E0317 18:38:38.543321 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.545945 kubelet[2014]: E0317 18:38:38.545914 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.546033 kubelet[2014]: W0317 18:38:38.546015 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.546369 kubelet[2014]: E0317 18:38:38.546351 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.546453 kubelet[2014]: W0317 18:38:38.546435 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.546822 kubelet[2014]: E0317 18:38:38.546803 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.546926 kubelet[2014]: W0317 18:38:38.546908 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.547224 kubelet[2014]: E0317 18:38:38.547206 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.547316 kubelet[2014]: W0317 18:38:38.547298 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.547613 kubelet[2014]: E0317 18:38:38.547595 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.547746 kubelet[2014]: W0317 18:38:38.547727 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.548051 kubelet[2014]: E0317 18:38:38.548033 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.548238 kubelet[2014]: W0317 18:38:38.548122 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.548238 kubelet[2014]: E0317 18:38:38.548144 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.548497 kubelet[2014]: E0317 18:38:38.548480 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.548739 kubelet[2014]: W0317 18:38:38.548563 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.548739 kubelet[2014]: E0317 18:38:38.548582 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.549455 kubelet[2014]: E0317 18:38:38.549436 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.550828 kubelet[2014]: W0317 18:38:38.550808 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.551036 kubelet[2014]: E0317 18:38:38.550923 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.551036 kubelet[2014]: E0317 18:38:38.550775 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.551497 kubelet[2014]: E0317 18:38:38.551378 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.551497 kubelet[2014]: W0317 18:38:38.551394 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.551497 kubelet[2014]: E0317 18:38:38.551408 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.552036 kubelet[2014]: E0317 18:38:38.551829 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.552036 kubelet[2014]: W0317 18:38:38.551844 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.552036 kubelet[2014]: E0317 18:38:38.551857 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:38:38.552362 kubelet[2014]: E0317 18:38:38.552346 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.552458 kubelet[2014]: W0317 18:38:38.552441 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.552724 kubelet[2014]: E0317 18:38:38.552522 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.552724 kubelet[2014]: E0317 18:38:38.550789 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.553931 kubelet[2014]: E0317 18:38:38.553911 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.554043 kubelet[2014]: W0317 18:38:38.554022 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.554236 kubelet[2014]: E0317 18:38:38.554117 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.554236 kubelet[2014]: E0317 18:38:38.550754 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.554492 kubelet[2014]: E0317 18:38:38.554475 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.554691 kubelet[2014]: W0317 18:38:38.554563 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.554691 kubelet[2014]: E0317 18:38:38.554585 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.556936 kubelet[2014]: E0317 18:38:38.550766 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.556936 kubelet[2014]: E0317 18:38:38.550783 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:38:38.557120 kubelet[2014]: E0317 18:38:38.557103 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.557209 kubelet[2014]: W0317 18:38:38.557192 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.557412 kubelet[2014]: E0317 18:38:38.557278 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.557692 kubelet[2014]: E0317 18:38:38.557673 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.557946 kubelet[2014]: W0317 18:38:38.557839 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.558202 kubelet[2014]: E0317 18:38:38.558026 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.558722 kubelet[2014]: E0317 18:38:38.558667 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.558722 kubelet[2014]: W0317 18:38:38.558683 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.559002 kubelet[2014]: E0317 18:38:38.558981 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.559349 kubelet[2014]: E0317 18:38:38.559238 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.559349 kubelet[2014]: W0317 18:38:38.559254 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.559518 kubelet[2014]: E0317 18:38:38.559472 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.559895 kubelet[2014]: E0317 18:38:38.559784 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.559895 kubelet[2014]: W0317 18:38:38.559800 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.560056 kubelet[2014]: E0317 18:38:38.560023 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:38:38.560363 kubelet[2014]: E0317 18:38:38.560315 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.560363 kubelet[2014]: W0317 18:38:38.560330 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.560562 kubelet[2014]: E0317 18:38:38.560437 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.560963 kubelet[2014]: E0317 18:38:38.560912 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.560963 kubelet[2014]: W0317 18:38:38.560929 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.561298 kubelet[2014]: E0317 18:38:38.561165 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.561582 kubelet[2014]: E0317 18:38:38.561565 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.561686 kubelet[2014]: W0317 18:38:38.561667 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.561925 kubelet[2014]: E0317 18:38:38.561800 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.562212 kubelet[2014]: E0317 18:38:38.562196 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.562307 kubelet[2014]: W0317 18:38:38.562290 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.562526 kubelet[2014]: E0317 18:38:38.562398 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.563334 kubelet[2014]: E0317 18:38:38.563196 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.563334 kubelet[2014]: W0317 18:38:38.563211 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.563334 kubelet[2014]: E0317 18:38:38.563225 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:38:38.567771 kubelet[2014]: E0317 18:38:38.567668 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.567771 kubelet[2014]: W0317 18:38:38.567682 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.567771 kubelet[2014]: E0317 18:38:38.567696 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.572927 kubelet[2014]: E0317 18:38:38.570700 2014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:38:38.572927 kubelet[2014]: W0317 18:38:38.570739 2014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:38:38.572927 kubelet[2014]: E0317 18:38:38.570751 2014 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:38:38.735025 containerd[1503]: time="2025-03-17T18:38:38.734866484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvjs6,Uid:5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5,Namespace:kube-system,Attempt:0,}" Mar 17 18:38:38.740727 containerd[1503]: time="2025-03-17T18:38:38.740511438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69pgb,Uid:9830dfb6-6956-4ec8-b00b-eb4f0aa4833e,Namespace:calico-system,Attempt:0,}" Mar 17 18:38:39.251670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894945837.mount: Deactivated successfully. 
Mar 17 18:38:39.260015 containerd[1503]: time="2025-03-17T18:38:39.259924075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:38:39.261550 containerd[1503]: time="2025-03-17T18:38:39.261508150Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:38:39.262410 containerd[1503]: time="2025-03-17T18:38:39.262371997Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Mar 17 18:38:39.263462 containerd[1503]: time="2025-03-17T18:38:39.263419758Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:38:39.263874 containerd[1503]: time="2025-03-17T18:38:39.263839366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 18:38:39.269750 containerd[1503]: time="2025-03-17T18:38:39.269092666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:38:39.270554 containerd[1503]: time="2025-03-17T18:38:39.270467146Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.825684ms" Mar 17 18:38:39.274863 containerd[1503]: time="2025-03-17T18:38:39.274753318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 539.733703ms" Mar 17 18:38:39.371525 containerd[1503]: time="2025-03-17T18:38:39.371446945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:38:39.371960 containerd[1503]: time="2025-03-17T18:38:39.371829639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:38:39.371960 containerd[1503]: time="2025-03-17T18:38:39.371852305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:38:39.373408 containerd[1503]: time="2025-03-17T18:38:39.372192092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:38:39.374765 containerd[1503]: time="2025-03-17T18:38:39.371112227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:38:39.374765 containerd[1503]: time="2025-03-17T18:38:39.373848483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:38:39.374765 containerd[1503]: time="2025-03-17T18:38:39.373859255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:38:39.374765 containerd[1503]: time="2025-03-17T18:38:39.373921691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:38:39.414759 kubelet[2014]: E0317 18:38:39.414638 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:39.434857 systemd[1]: Started cri-containerd-ab0b2883c29e5b291ee7c1e4dc714727e83741516151d666e7769aafb1959741.scope - libcontainer container ab0b2883c29e5b291ee7c1e4dc714727e83741516151d666e7769aafb1959741. Mar 17 18:38:39.439134 systemd[1]: Started cri-containerd-02ba6465535839b04b3097c0f777708a661f4c427af89d8fbf693e7ab382f081.scope - libcontainer container 02ba6465535839b04b3097c0f777708a661f4c427af89d8fbf693e7ab382f081. Mar 17 18:38:39.463234 containerd[1503]: time="2025-03-17T18:38:39.463119422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69pgb,Uid:9830dfb6-6956-4ec8-b00b-eb4f0aa4833e,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab0b2883c29e5b291ee7c1e4dc714727e83741516151d666e7769aafb1959741\"" Mar 17 18:38:39.465685 containerd[1503]: time="2025-03-17T18:38:39.465429706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 18:38:39.473848 containerd[1503]: time="2025-03-17T18:38:39.473808267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tvjs6,Uid:5b07f99a-e2e6-4cbc-828f-9c3ee248f2a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"02ba6465535839b04b3097c0f777708a661f4c427af89d8fbf693e7ab382f081\"" Mar 17 18:38:40.414890 kubelet[2014]: E0317 18:38:40.414794 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:40.490057 kubelet[2014]: E0317 18:38:40.489990 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:41.415024 kubelet[2014]: E0317 18:38:41.414984 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:41.547021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3840196308.mount: Deactivated successfully. 
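
The kubelet line just above repeats once per sync loop because /etc/kubernetes/manifests, the directory kubelet polls for static-pod manifests, does not exist on this node; kubelet simply ignores the file source when the path is absent. A minimal sketch of that kind of existence check, illustrative only and not kubelet's actual code (the path is taken from the log message):

```go
package main

import (
	"fmt"
	"os"
)

// Reproduce the check behind the repeated kubelet message
// "Unable to read config path ... path does not exist, ignoring".
func main() {
	const staticPodPath = "/etc/kubernetes/manifests" // path from the log line above

	if _, err := os.Stat(staticPodPath); os.IsNotExist(err) {
		fmt.Printf("Unable to read config path %q: path does not exist, ignoring\n", staticPodPath)
		return
	}
	fmt.Printf("static pod directory %q is present\n", staticPodPath)
}
```

On a node whose pods all come from the API server, as appears to be the case here, the message is noise rather than a fault.
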
Mar 17 18:38:41.617322 containerd[1503]: time="2025-03-17T18:38:41.617253286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:41.618443 containerd[1503]: time="2025-03-17T18:38:41.618396589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253" Mar 17 18:38:41.619454 containerd[1503]: time="2025-03-17T18:38:41.619412377Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:41.621107 containerd[1503]: time="2025-03-17T18:38:41.621086912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:41.621903 containerd[1503]: time="2025-03-17T18:38:41.621519163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.156058827s" Mar 17 18:38:41.621903 containerd[1503]: time="2025-03-17T18:38:41.621554995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 18:38:41.623126 containerd[1503]: time="2025-03-17T18:38:41.623093626Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:38:41.624167 containerd[1503]: time="2025-03-17T18:38:41.624007127Z" level=info msg="CreateContainer within sandbox \"ab0b2883c29e5b291ee7c1e4dc714727e83741516151d666e7769aafb1959741\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 18:38:41.649206 containerd[1503]: time="2025-03-17T18:38:41.649156931Z" level=info msg="CreateContainer within sandbox \"ab0b2883c29e5b291ee7c1e4dc714727e83741516151d666e7769aafb1959741\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7901de4fa2a1b26241cbc0bade2fe743e28f06de5d489bb01f164ae893937d02\"" Mar 17 18:38:41.649952 containerd[1503]: time="2025-03-17T18:38:41.649880750Z" level=info msg="StartContainer for \"7901de4fa2a1b26241cbc0bade2fe743e28f06de5d489bb01f164ae893937d02\"" Mar 17 18:38:41.677851 systemd[1]: Started cri-containerd-7901de4fa2a1b26241cbc0bade2fe743e28f06de5d489bb01f164ae893937d02.scope - libcontainer container 7901de4fa2a1b26241cbc0bade2fe743e28f06de5d489bb01f164ae893937d02. Mar 17 18:38:41.710947 containerd[1503]: time="2025-03-17T18:38:41.710894601Z" level=info msg="StartContainer for \"7901de4fa2a1b26241cbc0bade2fe743e28f06de5d489bb01f164ae893937d02\" returns successfully" Mar 17 18:38:41.726176 systemd[1]: cri-containerd-7901de4fa2a1b26241cbc0bade2fe743e28f06de5d489bb01f164ae893937d02.scope: Deactivated successfully. 
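
The pull records above carry both the image size in bytes and the wall-clock pull time, so effective throughput falls straight out of the log. A small sketch that redoes that arithmetic for two of the pulls recorded so far (the first pause:3.8 pull and the pod2daemon-flexvol pull; sizes and durations copied from the log lines):

```go
package main

import (
	"fmt"
	"time"
)

// Throughput for pulls logged above: size (bytes) divided by pull duration.
func main() {
	pulls := []struct {
		image string
		bytes float64
		dur   time.Duration
	}{
		{"registry.k8s.io/pause:3.8", 311286, 529825684 * time.Nanosecond},
		{"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2", 6857075, 2156058827 * time.Nanosecond},
	}
	for _, p := range pulls {
		mib := p.bytes / p.dur.Seconds() / (1 << 20)
		fmt.Printf("%-50s %9.0f bytes in %-14s ~ %.2f MiB/s\n", p.image, p.bytes, p.dur, mib)
	}
}
```

Nothing here needs more than the size and duration fields the log already prints.
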
Mar 17 18:38:41.760507 containerd[1503]: time="2025-03-17T18:38:41.760436474Z" level=info msg="shim disconnected" id=7901de4fa2a1b26241cbc0bade2fe743e28f06de5d489bb01f164ae893937d02 namespace=k8s.io Mar 17 18:38:41.760507 containerd[1503]: time="2025-03-17T18:38:41.760491004Z" level=warning msg="cleaning up after shim disconnected" id=7901de4fa2a1b26241cbc0bade2fe743e28f06de5d489bb01f164ae893937d02 namespace=k8s.io Mar 17 18:38:41.760507 containerd[1503]: time="2025-03-17T18:38:41.760499009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:38:42.415156 kubelet[2014]: E0317 18:38:42.415123 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:42.492032 kubelet[2014]: E0317 18:38:42.491507 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:42.522467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7901de4fa2a1b26241cbc0bade2fe743e28f06de5d489bb01f164ae893937d02-rootfs.mount: Deactivated successfully. Mar 17 18:38:42.661773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167512197.mount: Deactivated successfully. Mar 17 18:38:42.963025 containerd[1503]: time="2025-03-17T18:38:42.962944224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:42.963944 containerd[1503]: time="2025-03-17T18:38:42.963899496Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354658" Mar 17 18:38:42.964741 containerd[1503]: time="2025-03-17T18:38:42.964684715Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:42.966303 containerd[1503]: time="2025-03-17T18:38:42.966284684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:42.967108 containerd[1503]: time="2025-03-17T18:38:42.966755049Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 1.343633998s" Mar 17 18:38:42.967108 containerd[1503]: time="2025-03-17T18:38:42.966793908Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 17 18:38:42.968509 containerd[1503]: time="2025-03-17T18:38:42.968474107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 18:38:42.969227 containerd[1503]: time="2025-03-17T18:38:42.969192152Z" level=info msg="CreateContainer within sandbox \"02ba6465535839b04b3097c0f777708a661f4c427af89d8fbf693e7ab382f081\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:38:42.992884 containerd[1503]: time="2025-03-17T18:38:42.992819919Z" level=info 
msg="CreateContainer within sandbox \"02ba6465535839b04b3097c0f777708a661f4c427af89d8fbf693e7ab382f081\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d194d4bf2bd27bb3fe16e03e86178af8f263afa2ef05d50d8065bcc22a7289d1\"" Mar 17 18:38:42.994748 containerd[1503]: time="2025-03-17T18:38:42.993351319Z" level=info msg="StartContainer for \"d194d4bf2bd27bb3fe16e03e86178af8f263afa2ef05d50d8065bcc22a7289d1\"" Mar 17 18:38:43.024917 systemd[1]: Started cri-containerd-d194d4bf2bd27bb3fe16e03e86178af8f263afa2ef05d50d8065bcc22a7289d1.scope - libcontainer container d194d4bf2bd27bb3fe16e03e86178af8f263afa2ef05d50d8065bcc22a7289d1. Mar 17 18:38:43.055896 containerd[1503]: time="2025-03-17T18:38:43.055850258Z" level=info msg="StartContainer for \"d194d4bf2bd27bb3fe16e03e86178af8f263afa2ef05d50d8065bcc22a7289d1\" returns successfully" Mar 17 18:38:43.415640 kubelet[2014]: E0317 18:38:43.415591 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:43.515952 kubelet[2014]: I0317 18:38:43.515881 2014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tvjs6" podStartSLOduration=4.023493341 podStartE2EDuration="7.515865748s" podCreationTimestamp="2025-03-17 18:38:36 +0000 UTC" firstStartedPulling="2025-03-17 18:38:39.475380979 +0000 UTC m=+3.422797643" lastFinishedPulling="2025-03-17 18:38:42.967753388 +0000 UTC m=+6.915170050" observedRunningTime="2025-03-17 18:38:43.51498029 +0000 UTC m=+7.462396963" watchObservedRunningTime="2025-03-17 18:38:43.515865748 +0000 UTC m=+7.463282421" Mar 17 18:38:44.416476 kubelet[2014]: E0317 18:38:44.416402 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:44.490048 kubelet[2014]: E0317 18:38:44.489731 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:45.416622 kubelet[2014]: E0317 18:38:45.416588 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:45.461044 systemd[1]: Started sshd@7-37.27.32.129:22-61.206.202.179:57184.service - OpenSSH per-connection server daemon (61.206.202.179:57184). 
Mar 17 18:38:46.417977 kubelet[2014]: E0317 18:38:46.417893 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:46.493130 kubelet[2014]: E0317 18:38:46.493090 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:47.418056 kubelet[2014]: E0317 18:38:47.418021 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:48.249095 containerd[1503]: time="2025-03-17T18:38:48.249022677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:48.250355 containerd[1503]: time="2025-03-17T18:38:48.250308126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 17 18:38:48.251530 containerd[1503]: time="2025-03-17T18:38:48.251488796Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:48.253828 containerd[1503]: time="2025-03-17T18:38:48.253750120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:48.254781 containerd[1503]: time="2025-03-17T18:38:48.254187190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 5.285683372s" Mar 17 18:38:48.254781 containerd[1503]: time="2025-03-17T18:38:48.254213662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 18:38:48.256825 containerd[1503]: time="2025-03-17T18:38:48.256767187Z" level=info msg="CreateContainer within sandbox \"ab0b2883c29e5b291ee7c1e4dc714727e83741516151d666e7769aafb1959741\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 18:38:48.271722 containerd[1503]: time="2025-03-17T18:38:48.271660992Z" level=info msg="CreateContainer within sandbox \"ab0b2883c29e5b291ee7c1e4dc714727e83741516151d666e7769aafb1959741\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9d09aa806e923ae9feed206fe5f03969883ec32de24037248532fd1105799142\"" Mar 17 18:38:48.272385 containerd[1503]: time="2025-03-17T18:38:48.272340225Z" level=info msg="StartContainer for \"9d09aa806e923ae9feed206fe5f03969883ec32de24037248532fd1105799142\"" Mar 17 18:38:48.302891 systemd[1]: Started cri-containerd-9d09aa806e923ae9feed206fe5f03969883ec32de24037248532fd1105799142.scope - libcontainer container 9d09aa806e923ae9feed206fe5f03969883ec32de24037248532fd1105799142. 
Mar 17 18:38:48.335113 containerd[1503]: time="2025-03-17T18:38:48.335026703Z" level=info msg="StartContainer for \"9d09aa806e923ae9feed206fe5f03969883ec32de24037248532fd1105799142\" returns successfully" Mar 17 18:38:48.418727 kubelet[2014]: E0317 18:38:48.418651 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:48.491744 kubelet[2014]: E0317 18:38:48.491495 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:48.770309 containerd[1503]: time="2025-03-17T18:38:48.770256898Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:38:48.772383 sshd[2421]: maximum authentication attempts exceeded for root from 61.206.202.179 port 57184 ssh2 [preauth] Mar 17 18:38:48.772383 sshd[2421]: Disconnecting authenticating user root 61.206.202.179 port 57184: Too many authentication failures [preauth] Mar 17 18:38:48.773691 systemd[1]: cri-containerd-9d09aa806e923ae9feed206fe5f03969883ec32de24037248532fd1105799142.scope: Deactivated successfully. Mar 17 18:38:48.775568 systemd[1]: sshd@7-37.27.32.129:22-61.206.202.179:57184.service: Deactivated successfully. Mar 17 18:38:48.793015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d09aa806e923ae9feed206fe5f03969883ec32de24037248532fd1105799142-rootfs.mount: Deactivated successfully. Mar 17 18:38:48.838081 containerd[1503]: time="2025-03-17T18:38:48.838023831Z" level=info msg="shim disconnected" id=9d09aa806e923ae9feed206fe5f03969883ec32de24037248532fd1105799142 namespace=k8s.io Mar 17 18:38:48.838364 containerd[1503]: time="2025-03-17T18:38:48.838312918Z" level=warning msg="cleaning up after shim disconnected" id=9d09aa806e923ae9feed206fe5f03969883ec32de24037248532fd1105799142 namespace=k8s.io Mar 17 18:38:48.838489 containerd[1503]: time="2025-03-17T18:38:48.838354341Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:38:48.869221 kubelet[2014]: I0317 18:38:48.869180 2014 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 18:38:49.346007 systemd[1]: Started sshd@8-37.27.32.129:22-61.206.202.179:57836.service - OpenSSH per-connection server daemon (61.206.202.179:57836). Mar 17 18:38:49.419475 kubelet[2014]: E0317 18:38:49.419428 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:49.528078 containerd[1503]: time="2025-03-17T18:38:49.527997160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 18:38:50.419861 kubelet[2014]: E0317 18:38:50.419774 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:50.495291 systemd[1]: Created slice kubepods-besteffort-pode605fe16_99a5_491e_bb58_ea92f025ef2b.slice - libcontainer container kubepods-besteffort-pode605fe16_99a5_491e_bb58_ea92f025ef2b.slice. 
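
The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") comes from containerd's attempt to reload CNI configuration after a filesystem change event on the conf directory: install-cni has just written calico-kubeconfig, but no network config list exists yet, so the reload fails and the node's network stays not ready. A minimal, illustrative watch-and-reload loop in the same spirit, using the fsnotify library; this is not containerd's code, only the event-driven pattern the log message describes:

```go
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// Watch the CNI configuration directory and attempt a "reload" whenever a
// file is created or written, mirroring the fs change event in the log above.
func main() {
	const cniConfDir = "/etc/cni/net.d"

	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add(cniConfDir); err != nil {
		log.Fatal(err)
	}

	for ev := range w.Events {
		if ev.Op&(fsnotify.Create|fsnotify.Write) == 0 {
			continue
		}
		// A real runtime parses *.conf/*.conflist files here; this sketch only
		// checks whether any network config exists at all.
		confs, _ := filepath.Glob(filepath.Join(cniConfDir, "*.conflist"))
		if len(confs) == 0 {
			log.Printf("fs change on %q, but no network config found in %s", ev.Name, cniConfDir)
			continue
		}
		log.Printf("would reload CNI config from: %v", confs)
	}
}
```
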
Mar 17 18:38:50.497732 containerd[1503]: time="2025-03-17T18:38:50.497685288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:0,}" Mar 17 18:38:50.555294 containerd[1503]: time="2025-03-17T18:38:50.555225305Z" level=error msg="Failed to destroy network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:50.556155 containerd[1503]: time="2025-03-17T18:38:50.555574338Z" level=error msg="encountered an error cleaning up failed sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:50.556155 containerd[1503]: time="2025-03-17T18:38:50.555627334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:50.556939 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b-shm.mount: Deactivated successfully. 
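
From here on, every sandbox failure in this excerpt is the same fault: the Calico CNI plugin stats /var/lib/calico/nodename, a file the error text says is produced by a running calico/node container with /var/lib/calico mounted, and the file is not there yet because calico-node-69pgb is still working through its flexvol-driver and install-cni containers. A tiny diagnostic sketch that performs the same check the error message points at (path taken from the error text; illustrative only):

```go
package main

import (
	"fmt"
	"os"
)

// The same check the CNI error above describes: has calico/node written the
// nodename file yet?
func main() {
	const nodenameFile = "/var/lib/calico/nodename" // path from the error text

	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Printf("read %s: %v\n", nodenameFile, err)
		fmt.Println("check that the calico/node container is running and has mounted /var/lib/calico/")
		os.Exit(1)
	}
	fmt.Printf("calico/node has registered this node as %q\n", string(b))
}
```
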
Mar 17 18:38:50.557865 kubelet[2014]: E0317 18:38:50.557350 2014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:50.557865 kubelet[2014]: E0317 18:38:50.557423 2014 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:50.557865 kubelet[2014]: E0317 18:38:50.557443 2014 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:50.558029 kubelet[2014]: E0317 18:38:50.557490 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:51.420527 kubelet[2014]: E0317 18:38:51.420438 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:51.531132 kubelet[2014]: I0317 18:38:51.530910 2014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b" Mar 17 18:38:51.531629 containerd[1503]: time="2025-03-17T18:38:51.531591674Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\"" Mar 17 18:38:51.531822 containerd[1503]: time="2025-03-17T18:38:51.531806110Z" level=info msg="Ensure that sandbox 9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b in task-service has been cleanup successfully" Mar 17 18:38:51.533547 containerd[1503]: time="2025-03-17T18:38:51.532026867Z" level=info msg="TearDown network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" successfully" Mar 17 18:38:51.533547 containerd[1503]: time="2025-03-17T18:38:51.532042798Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" returns successfully" Mar 17 18:38:51.533596 systemd[1]: run-netns-cni\x2d4075b188\x2d168b\x2d99cc\x2dca74\x2dbcedea9965ad.mount: Deactivated 
successfully. Mar 17 18:38:51.534741 containerd[1503]: time="2025-03-17T18:38:51.534236266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:1,}" Mar 17 18:38:51.595017 containerd[1503]: time="2025-03-17T18:38:51.594969950Z" level=error msg="Failed to destroy network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:51.596375 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777-shm.mount: Deactivated successfully. Mar 17 18:38:51.596988 containerd[1503]: time="2025-03-17T18:38:51.596958904Z" level=error msg="encountered an error cleaning up failed sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:51.597049 containerd[1503]: time="2025-03-17T18:38:51.597014984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:51.597605 kubelet[2014]: E0317 18:38:51.597236 2014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:51.597605 kubelet[2014]: E0317 18:38:51.597293 2014 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:51.597605 kubelet[2014]: E0317 18:38:51.597311 2014 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:51.597721 kubelet[2014]: E0317 18:38:51.597346 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:52.421191 kubelet[2014]: E0317 18:38:52.421126 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:52.534475 kubelet[2014]: I0317 18:38:52.534436 2014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777" Mar 17 18:38:52.535663 containerd[1503]: time="2025-03-17T18:38:52.535361365Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\"" Mar 17 18:38:52.535739 containerd[1503]: time="2025-03-17T18:38:52.535725987Z" level=info msg="Ensure that sandbox 343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777 in task-service has been cleanup successfully" Mar 17 18:38:52.538629 containerd[1503]: time="2025-03-17T18:38:52.537603794Z" level=info msg="TearDown network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" successfully" Mar 17 18:38:52.538629 containerd[1503]: time="2025-03-17T18:38:52.537646518Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" returns successfully" Mar 17 18:38:52.538285 systemd[1]: run-netns-cni\x2de16e35f3\x2d3e34\x2d3869\x2d97e0\x2d83c96b53dd60.mount: Deactivated successfully. 
Mar 17 18:38:52.540171 containerd[1503]: time="2025-03-17T18:38:52.538803559Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\"" Mar 17 18:38:52.540171 containerd[1503]: time="2025-03-17T18:38:52.538883176Z" level=info msg="TearDown network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" successfully" Mar 17 18:38:52.540171 containerd[1503]: time="2025-03-17T18:38:52.538893528Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" returns successfully" Mar 17 18:38:52.540417 containerd[1503]: time="2025-03-17T18:38:52.540389298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:2,}" Mar 17 18:38:52.600264 containerd[1503]: time="2025-03-17T18:38:52.600210207Z" level=error msg="Failed to destroy network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:52.601160 containerd[1503]: time="2025-03-17T18:38:52.600965573Z" level=error msg="encountered an error cleaning up failed sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:52.601160 containerd[1503]: time="2025-03-17T18:38:52.601068967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:52.602183 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb-shm.mount: Deactivated successfully. 
Mar 17 18:38:52.602729 kubelet[2014]: E0317 18:38:52.602286 2014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:52.602729 kubelet[2014]: E0317 18:38:52.602339 2014 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:52.602729 kubelet[2014]: E0317 18:38:52.602359 2014 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:52.602818 kubelet[2014]: E0317 18:38:52.602391 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:52.681944 sshd[2494]: maximum authentication attempts exceeded for root from 61.206.202.179 port 57836 ssh2 [preauth] Mar 17 18:38:52.683020 sshd[2494]: Disconnecting authenticating user root 61.206.202.179 port 57836: Too many authentication failures [preauth] Mar 17 18:38:52.684248 systemd[1]: sshd@8-37.27.32.129:22-61.206.202.179:57836.service: Deactivated successfully. Mar 17 18:38:53.252256 systemd[1]: Started sshd@9-37.27.32.129:22-61.206.202.179:58506.service - OpenSSH per-connection server daemon (61.206.202.179:58506). 
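
Interleaved with the CNI failures, the node is also being probed over SSH from 61.206.202.179: each connection burns through its allowed authentication attempts, sshd cuts it off with "maximum authentication attempts exceeded ... [preauth]", and another connection arrives on a new source port. A small, illustrative sketch that tallies such lines per source address from journal text on stdin (the regular expression assumes lines shaped like the sshd entries above, nothing more general):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Count "maximum authentication attempts exceeded" sshd lines per source IP,
// reading journal text (for example, piped from journalctl) on stdin.
func main() {
	re := regexp.MustCompile(`maximum authentication attempts exceeded for \S+ from (\S+) port \d+`)
	counts := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for ip, n := range counts {
		fmt.Printf("%-15s %d rejected connections\n", ip, n)
	}
}
```
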
Mar 17 18:38:53.422063 kubelet[2014]: E0317 18:38:53.422010 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:53.539832 kubelet[2014]: I0317 18:38:53.539576 2014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb" Mar 17 18:38:53.541074 containerd[1503]: time="2025-03-17T18:38:53.540994338Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\"" Mar 17 18:38:53.541489 containerd[1503]: time="2025-03-17T18:38:53.541437684Z" level=info msg="Ensure that sandbox d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb in task-service has been cleanup successfully" Mar 17 18:38:53.543862 containerd[1503]: time="2025-03-17T18:38:53.543831697Z" level=info msg="TearDown network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" successfully" Mar 17 18:38:53.544037 containerd[1503]: time="2025-03-17T18:38:53.543936484Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" returns successfully" Mar 17 18:38:53.544773 containerd[1503]: time="2025-03-17T18:38:53.544578072Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\"" Mar 17 18:38:53.544773 containerd[1503]: time="2025-03-17T18:38:53.544690655Z" level=info msg="TearDown network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" successfully" Mar 17 18:38:53.544773 containerd[1503]: time="2025-03-17T18:38:53.544719572Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" returns successfully" Mar 17 18:38:53.545634 containerd[1503]: time="2025-03-17T18:38:53.545126697Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\"" Mar 17 18:38:53.545634 containerd[1503]: time="2025-03-17T18:38:53.545205853Z" level=info msg="TearDown network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" successfully" Mar 17 18:38:53.545634 containerd[1503]: time="2025-03-17T18:38:53.545216263Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" returns successfully" Mar 17 18:38:53.545957 containerd[1503]: time="2025-03-17T18:38:53.545936748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:3,}" Mar 17 18:38:53.546184 systemd[1]: run-netns-cni\x2ddb3e3dc7\x2d6751\x2d6a38\x2d41c9\x2da17d74b61514.mount: Deactivated successfully. 
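
The retry pattern is now fully visible: before each new RunPodSandbox attempt for csi-node-driver-hgspb, the sandbox from the previous attempt is stopped and its network torn down along with every sandbox that failed before it, so the teardown chain grows by one entry per attempt while the Attempt counter climbs (0, 1, 2, 3, ...). A schematic sketch of that loop; the sandbox IDs here are placeholders and the failure is hard-coded to the error from the log, so this only illustrates the shape of the retries, not the kubelet/CRI implementation:

```go
package main

import (
	"errors"
	"fmt"
)

var errNoNodename = errors.New(
	`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

// runPodSandbox stands in for the CRI call; in this sketch it always fails
// the way the log above shows, handing back a placeholder sandbox ID.
func runPodSandbox(attempt int) (string, error) {
	return fmt.Sprintf("sandbox-%d", attempt), errNoNodename
}

func main() {
	var failed []string // sandbox IDs left behind by earlier attempts

	for attempt := 0; attempt < 4; attempt++ {
		// Tear down everything that failed before trying again, mirroring the
		// chain of StopPodSandbox / TearDown network calls in the log.
		for _, id := range failed {
			fmt.Printf("StopPodSandbox %q / TearDown network\n", id)
		}

		id, err := runPodSandbox(attempt)
		if err != nil {
			fmt.Printf("RunPodSandbox Attempt:%d failed for %q: %v\n", attempt, id, err)
			failed = append(failed, id)
			continue
		}
		fmt.Printf("RunPodSandbox Attempt:%d succeeded: %q\n", attempt, id)
		return
	}
}
```
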
Mar 17 18:38:53.618129 containerd[1503]: time="2025-03-17T18:38:53.618076941Z" level=error msg="Failed to destroy network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:53.620143 containerd[1503]: time="2025-03-17T18:38:53.619802913Z" level=error msg="encountered an error cleaning up failed sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:53.620143 containerd[1503]: time="2025-03-17T18:38:53.619859716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:53.620243 kubelet[2014]: E0317 18:38:53.620047 2014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:53.620243 kubelet[2014]: E0317 18:38:53.620097 2014 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:53.620243 kubelet[2014]: E0317 18:38:53.620120 2014 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:53.619910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538-shm.mount: Deactivated successfully. 
Mar 17 18:38:53.620442 kubelet[2014]: E0317 18:38:53.620155 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:54.422233 kubelet[2014]: E0317 18:38:54.422136 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:54.544994 kubelet[2014]: I0317 18:38:54.544926 2014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538" Mar 17 18:38:54.546448 containerd[1503]: time="2025-03-17T18:38:54.546354321Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\"" Mar 17 18:38:54.546966 containerd[1503]: time="2025-03-17T18:38:54.546911632Z" level=info msg="Ensure that sandbox ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538 in task-service has been cleanup successfully" Mar 17 18:38:54.549166 containerd[1503]: time="2025-03-17T18:38:54.549112036Z" level=info msg="TearDown network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" successfully" Mar 17 18:38:54.549261 containerd[1503]: time="2025-03-17T18:38:54.549169509Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" returns successfully" Mar 17 18:38:54.549627 containerd[1503]: time="2025-03-17T18:38:54.549598006Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\"" Mar 17 18:38:54.550083 containerd[1503]: time="2025-03-17T18:38:54.549965000Z" level=info msg="TearDown network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" successfully" Mar 17 18:38:54.550083 containerd[1503]: time="2025-03-17T18:38:54.550002915Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" returns successfully" Mar 17 18:38:54.550597 containerd[1503]: time="2025-03-17T18:38:54.550414518Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\"" Mar 17 18:38:54.550597 containerd[1503]: time="2025-03-17T18:38:54.550530968Z" level=info msg="TearDown network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" successfully" Mar 17 18:38:54.550597 containerd[1503]: time="2025-03-17T18:38:54.550542751Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" returns successfully" Mar 17 18:38:54.552254 containerd[1503]: time="2025-03-17T18:38:54.552089554Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\"" Mar 17 18:38:54.552254 containerd[1503]: time="2025-03-17T18:38:54.552191507Z" level=info msg="TearDown network for sandbox 
\"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" successfully" Mar 17 18:38:54.552254 containerd[1503]: time="2025-03-17T18:38:54.552202087Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" returns successfully" Mar 17 18:38:54.553236 containerd[1503]: time="2025-03-17T18:38:54.552699689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:4,}" Mar 17 18:38:54.553204 systemd[1]: run-netns-cni\x2ddbde388e\x2d6ee4\x2dddf9\x2dbf3d\x2d3891ac899313.mount: Deactivated successfully. Mar 17 18:38:54.676608 containerd[1503]: time="2025-03-17T18:38:54.674520100Z" level=error msg="Failed to destroy network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:54.676243 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274-shm.mount: Deactivated successfully. Mar 17 18:38:54.678256 containerd[1503]: time="2025-03-17T18:38:54.677360588Z" level=error msg="encountered an error cleaning up failed sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:54.678256 containerd[1503]: time="2025-03-17T18:38:54.677451197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:54.678336 kubelet[2014]: E0317 18:38:54.677688 2014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:54.678336 kubelet[2014]: E0317 18:38:54.677790 2014 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:54.678336 kubelet[2014]: E0317 18:38:54.677808 2014 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:54.678480 kubelet[2014]: E0317 18:38:54.677852 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:55.258088 systemd[1]: Created slice kubepods-besteffort-pod548720f2_9719_430d_bc46_d413201658f3.slice - libcontainer container kubepods-besteffort-pod548720f2_9719_430d_bc46_d413201658f3.slice. Mar 17 18:38:55.348565 kubelet[2014]: I0317 18:38:55.348529 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88xd8\" (UniqueName: \"kubernetes.io/projected/548720f2-9719-430d-bc46-d413201658f3-kube-api-access-88xd8\") pod \"nginx-deployment-8587fbcb89-k9bfm\" (UID: \"548720f2-9719-430d-bc46-d413201658f3\") " pod="default/nginx-deployment-8587fbcb89-k9bfm" Mar 17 18:38:55.424025 kubelet[2014]: E0317 18:38:55.423976 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:55.549549 kubelet[2014]: I0317 18:38:55.549364 2014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274" Mar 17 18:38:55.551406 containerd[1503]: time="2025-03-17T18:38:55.550404942Z" level=info msg="StopPodSandbox for \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\"" Mar 17 18:38:55.551406 containerd[1503]: time="2025-03-17T18:38:55.550580208Z" level=info msg="Ensure that sandbox 8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274 in task-service has been cleanup successfully" Mar 17 18:38:55.551406 containerd[1503]: time="2025-03-17T18:38:55.550988221Z" level=info msg="TearDown network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" successfully" Mar 17 18:38:55.551406 containerd[1503]: time="2025-03-17T18:38:55.551026247Z" level=info msg="StopPodSandbox for \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" returns successfully" Mar 17 18:38:55.551406 containerd[1503]: time="2025-03-17T18:38:55.551297561Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\"" Mar 17 18:38:55.551742 containerd[1503]: time="2025-03-17T18:38:55.551474120Z" level=info msg="TearDown network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" successfully" Mar 17 18:38:55.551742 containerd[1503]: time="2025-03-17T18:38:55.551485191Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" returns successfully" Mar 17 18:38:55.552732 containerd[1503]: time="2025-03-17T18:38:55.551888707Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\"" Mar 17 18:38:55.552732 
containerd[1503]: time="2025-03-17T18:38:55.551998584Z" level=info msg="TearDown network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" successfully" Mar 17 18:38:55.552732 containerd[1503]: time="2025-03-17T18:38:55.552008333Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" returns successfully" Mar 17 18:38:55.552732 containerd[1503]: time="2025-03-17T18:38:55.552455344Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\"" Mar 17 18:38:55.552732 containerd[1503]: time="2025-03-17T18:38:55.552542777Z" level=info msg="TearDown network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" successfully" Mar 17 18:38:55.552732 containerd[1503]: time="2025-03-17T18:38:55.552552447Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" returns successfully" Mar 17 18:38:55.552884 containerd[1503]: time="2025-03-17T18:38:55.552813842Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\"" Mar 17 18:38:55.552961 containerd[1503]: time="2025-03-17T18:38:55.552901324Z" level=info msg="TearDown network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" successfully" Mar 17 18:38:55.552961 containerd[1503]: time="2025-03-17T18:38:55.552917946Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" returns successfully" Mar 17 18:38:55.554765 containerd[1503]: time="2025-03-17T18:38:55.553567287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:5,}" Mar 17 18:38:55.557843 systemd[1]: run-netns-cni\x2d05c98782\x2d5722\x2decf4\x2d531f\x2d50b757910886.mount: Deactivated successfully. 
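
The two "Created slice kubepods-besteffort-pod....slice" entries above, one for csi-node-driver-hgspb and one for the new nginx-deployment-8587fbcb89-k9bfm pod, follow a naming rule that can be read straight off this log: the pod UID is appended to its QoS-class parent with every dash in the UID rewritten as an underscore. A small sketch of that mapping, checked against the two UIDs in this excerpt (the rule as stated here is inferred from these lines):

```go
package main

import (
	"fmt"
	"strings"
)

// Rebuild the systemd slice names seen in the log from the pod UIDs: dashes
// in the UID become underscores under the QoS-class parent slice.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "e605fe16-99a5-491e-bb58-ea92f025ef2b"))
	// kubepods-besteffort-pode605fe16_99a5_491e_bb58_ea92f025ef2b.slice
	fmt.Println(podSliceName("besteffort", "548720f2-9719-430d-bc46-d413201658f3"))
	// kubepods-besteffort-pod548720f2_9719_430d_bc46_d413201658f3.slice
}
```
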
Mar 17 18:38:55.562794 containerd[1503]: time="2025-03-17T18:38:55.562480062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-k9bfm,Uid:548720f2-9719-430d-bc46-d413201658f3,Namespace:default,Attempt:0,}" Mar 17 18:38:55.657896 containerd[1503]: time="2025-03-17T18:38:55.657839701Z" level=error msg="Failed to destroy network for sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:55.658719 containerd[1503]: time="2025-03-17T18:38:55.658666941Z" level=error msg="encountered an error cleaning up failed sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:55.658866 containerd[1503]: time="2025-03-17T18:38:55.658766137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:55.659008 kubelet[2014]: E0317 18:38:55.658976 2014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:55.659160 kubelet[2014]: E0317 18:38:55.659025 2014 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:55.659160 kubelet[2014]: E0317 18:38:55.659055 2014 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:55.659160 kubelet[2014]: E0317 18:38:55.659091 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:55.676069 containerd[1503]: time="2025-03-17T18:38:55.676044677Z" level=error msg="Failed to destroy network for sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:55.676441 containerd[1503]: time="2025-03-17T18:38:55.676420097Z" level=error msg="encountered an error cleaning up failed sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:55.676576 containerd[1503]: time="2025-03-17T18:38:55.676541938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-k9bfm,Uid:548720f2-9719-430d-bc46-d413201658f3,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:55.677020 kubelet[2014]: E0317 18:38:55.676850 2014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:55.677020 kubelet[2014]: E0317 18:38:55.676906 2014 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-k9bfm" Mar 17 18:38:55.677020 kubelet[2014]: E0317 18:38:55.676927 2014 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-k9bfm" Mar 17 18:38:55.677146 kubelet[2014]: E0317 18:38:55.676961 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-k9bfm_default(548720f2-9719-430d-bc46-d413201658f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-k9bfm_default(548720f2-9719-430d-bc46-d413201658f3)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-k9bfm" podUID="548720f2-9719-430d-bc46-d413201658f3" Mar 17 18:38:56.409530 kubelet[2014]: E0317 18:38:56.409482 2014 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:56.424088 kubelet[2014]: E0317 18:38:56.424064 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:56.551940 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e-shm.mount: Deactivated successfully. Mar 17 18:38:56.552296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16-shm.mount: Deactivated successfully. Mar 17 18:38:56.556010 kubelet[2014]: I0317 18:38:56.555962 2014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16" Mar 17 18:38:56.557003 containerd[1503]: time="2025-03-17T18:38:56.556425115Z" level=info msg="StopPodSandbox for \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\"" Mar 17 18:38:56.559849 containerd[1503]: time="2025-03-17T18:38:56.558234208Z" level=info msg="Ensure that sandbox a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16 in task-service has been cleanup successfully" Mar 17 18:38:56.559621 systemd[1]: run-netns-cni\x2d59e772dd\x2d9384\x2d3b30\x2df8e0\x2d8c486e12737a.mount: Deactivated successfully. 
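The repeated plugin type="calico" failed (add/delete) errors above all come down to one readiness probe: before wiring up any pod, the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes only once it is running and has mounted /var/lib/calico/ where the plugin can see it. Until that file exists, every RunPodSandbox call fails and kubelet keeps retrying. A minimal Go sketch of that kind of check, for illustration only under the path shown in the log (not the plugin's actual source):

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path named in the errors above. calico-node writes it
// after start-up, and the CNI plugin treats its absence as "networking not
// ready yet" on this node.
const nodenameFile = "/var/lib/calico/nodename"

// ensureCalicoNodeReady reproduces the failure mode seen in the log: a missing
// nodename file means pod networking cannot be set up yet.
func ensureCalicoNodeReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := ensureCalicoNodeReady(); err != nil {
		fmt.Println("not ready:", err)
		return
	}
	fmt.Println("calico-node looks ready; CNI ADD calls should start succeeding")
}

Once the calico-node container is started further down in the log, the file appears and the same sandboxes are recreated successfully.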
Mar 17 18:38:56.560447 kubelet[2014]: I0317 18:38:56.560013 2014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e" Mar 17 18:38:56.560477 containerd[1503]: time="2025-03-17T18:38:56.560345285Z" level=info msg="StopPodSandbox for \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\"" Mar 17 18:38:56.560836 containerd[1503]: time="2025-03-17T18:38:56.560513917Z" level=info msg="Ensure that sandbox 4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e in task-service has been cleanup successfully" Mar 17 18:38:56.560836 containerd[1503]: time="2025-03-17T18:38:56.560583865Z" level=info msg="TearDown network for sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\" successfully" Mar 17 18:38:56.560836 containerd[1503]: time="2025-03-17T18:38:56.560600558Z" level=info msg="StopPodSandbox for \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\" returns successfully" Mar 17 18:38:56.560937 containerd[1503]: time="2025-03-17T18:38:56.560918865Z" level=info msg="TearDown network for sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\" successfully" Mar 17 18:38:56.560937 containerd[1503]: time="2025-03-17T18:38:56.560933634Z" level=info msg="StopPodSandbox for \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\" returns successfully" Mar 17 18:38:56.562758 containerd[1503]: time="2025-03-17T18:38:56.562107826Z" level=info msg="StopPodSandbox for \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\"" Mar 17 18:38:56.562758 containerd[1503]: time="2025-03-17T18:38:56.562182333Z" level=info msg="TearDown network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" successfully" Mar 17 18:38:56.562758 containerd[1503]: time="2025-03-17T18:38:56.562191612Z" level=info msg="StopPodSandbox for \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" returns successfully" Mar 17 18:38:56.562758 containerd[1503]: time="2025-03-17T18:38:56.562262871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-k9bfm,Uid:548720f2-9719-430d-bc46-d413201658f3,Namespace:default,Attempt:1,}" Mar 17 18:38:56.563496 systemd[1]: run-netns-cni\x2da39cdaf1\x2dee62\x2d5963\x2d7532\x2d2741f4dbcdb0.mount: Deactivated successfully. 
Mar 17 18:38:56.564938 containerd[1503]: time="2025-03-17T18:38:56.564190969Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\"" Mar 17 18:38:56.564938 containerd[1503]: time="2025-03-17T18:38:56.564274212Z" level=info msg="TearDown network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" successfully" Mar 17 18:38:56.564938 containerd[1503]: time="2025-03-17T18:38:56.564284162Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" returns successfully" Mar 17 18:38:56.565688 containerd[1503]: time="2025-03-17T18:38:56.565656806Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\"" Mar 17 18:38:56.565793 containerd[1503]: time="2025-03-17T18:38:56.565757475Z" level=info msg="TearDown network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" successfully" Mar 17 18:38:56.565793 containerd[1503]: time="2025-03-17T18:38:56.565773265Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" returns successfully" Mar 17 18:38:56.566889 containerd[1503]: time="2025-03-17T18:38:56.566510588Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\"" Mar 17 18:38:56.566889 containerd[1503]: time="2025-03-17T18:38:56.566604511Z" level=info msg="TearDown network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" successfully" Mar 17 18:38:56.566889 containerd[1503]: time="2025-03-17T18:38:56.566622187Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" returns successfully" Mar 17 18:38:56.567261 containerd[1503]: time="2025-03-17T18:38:56.567243891Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\"" Mar 17 18:38:56.567373 containerd[1503]: time="2025-03-17T18:38:56.567358778Z" level=info msg="TearDown network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" successfully" Mar 17 18:38:56.567424 containerd[1503]: time="2025-03-17T18:38:56.567412744Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" returns successfully" Mar 17 18:38:56.568095 containerd[1503]: time="2025-03-17T18:38:56.568060159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:6,}" Mar 17 18:38:56.592537 sshd[2589]: maximum authentication attempts exceeded for root from 61.206.202.179 port 58506 ssh2 [preauth] Mar 17 18:38:56.592537 sshd[2589]: Disconnecting authenticating user root 61.206.202.179 port 58506: Too many authentication failures [preauth] Mar 17 18:38:56.597826 systemd[1]: sshd@9-37.27.32.129:22-61.206.202.179:58506.service: Deactivated successfully. 
Mar 17 18:38:56.683471 containerd[1503]: time="2025-03-17T18:38:56.683358474Z" level=error msg="Failed to destroy network for sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:56.684163 containerd[1503]: time="2025-03-17T18:38:56.684140512Z" level=error msg="encountered an error cleaning up failed sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:56.684504 containerd[1503]: time="2025-03-17T18:38:56.684253346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:56.684504 containerd[1503]: time="2025-03-17T18:38:56.684414362Z" level=error msg="Failed to destroy network for sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:56.684572 kubelet[2014]: E0317 18:38:56.684442 2014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:56.684572 kubelet[2014]: E0317 18:38:56.684487 2014 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:56.684572 kubelet[2014]: E0317 18:38:56.684514 2014 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hgspb" Mar 17 18:38:56.684694 kubelet[2014]: E0317 18:38:56.684551 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-hgspb_calico-system(e605fe16-99a5-491e-bb58-ea92f025ef2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hgspb" podUID="e605fe16-99a5-491e-bb58-ea92f025ef2b" Mar 17 18:38:56.685068 containerd[1503]: time="2025-03-17T18:38:56.684970438Z" level=error msg="encountered an error cleaning up failed sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:56.685068 containerd[1503]: time="2025-03-17T18:38:56.685005737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-k9bfm,Uid:548720f2-9719-430d-bc46-d413201658f3,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:56.685181 kubelet[2014]: E0317 18:38:56.685143 2014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:38:56.685212 kubelet[2014]: E0317 18:38:56.685191 2014 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-k9bfm" Mar 17 18:38:56.685212 kubelet[2014]: E0317 18:38:56.685208 2014 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-k9bfm" Mar 17 18:38:56.685256 kubelet[2014]: E0317 18:38:56.685232 2014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-k9bfm_default(548720f2-9719-430d-bc46-d413201658f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-k9bfm_default(548720f2-9719-430d-bc46-d413201658f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-k9bfm" podUID="548720f2-9719-430d-bc46-d413201658f3" Mar 17 18:38:56.836217 containerd[1503]: time="2025-03-17T18:38:56.836159053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:56.836873 containerd[1503]: time="2025-03-17T18:38:56.836770357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 18:38:56.838235 containerd[1503]: time="2025-03-17T18:38:56.837586454Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:56.839419 containerd[1503]: time="2025-03-17T18:38:56.839380036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:38:56.840155 containerd[1503]: time="2025-03-17T18:38:56.839831385Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 7.311784206s" Mar 17 18:38:56.840155 containerd[1503]: time="2025-03-17T18:38:56.839864691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 18:38:56.854254 containerd[1503]: time="2025-03-17T18:38:56.854204441Z" level=info msg="CreateContainer within sandbox \"ab0b2883c29e5b291ee7c1e4dc714727e83741516151d666e7769aafb1959741\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 18:38:56.866082 containerd[1503]: time="2025-03-17T18:38:56.866043786Z" level=info msg="CreateContainer within sandbox \"ab0b2883c29e5b291ee7c1e4dc714727e83741516151d666e7769aafb1959741\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0c3fa42cdbdee956a95c62cfe9535e5eb77ced50cb0f59dfba4f56a3db3b8b7a\"" Mar 17 18:38:56.866485 containerd[1503]: time="2025-03-17T18:38:56.866438614Z" level=info msg="StartContainer for \"0c3fa42cdbdee956a95c62cfe9535e5eb77ced50cb0f59dfba4f56a3db3b8b7a\"" Mar 17 18:38:56.929835 systemd[1]: Started cri-containerd-0c3fa42cdbdee956a95c62cfe9535e5eb77ced50cb0f59dfba4f56a3db3b8b7a.scope - libcontainer container 0c3fa42cdbdee956a95c62cfe9535e5eb77ced50cb0f59dfba4f56a3db3b8b7a. Mar 17 18:38:56.964980 containerd[1503]: time="2025-03-17T18:38:56.964879691Z" level=info msg="StartContainer for \"0c3fa42cdbdee956a95c62cfe9535e5eb77ced50cb0f59dfba4f56a3db3b8b7a\" returns successfully" Mar 17 18:38:57.021164 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 18:38:57.021277 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 18:38:57.167700 systemd[1]: Started sshd@10-37.27.32.129:22-61.206.202.179:59168.service - OpenSSH per-connection server daemon (61.206.202.179:59168). 
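Notice how the same pods are re-run with an incremented Attempt field while the CNI plugin is unavailable: nginx-deployment-8587fbcb89-k9bfm moves from Attempt:0 to Attempt:1 and later Attempt:2, and csi-node-driver-hgspb is already at Attempt:6 and 7 in this window. A small, hypothetical Go helper for tallying those retries from a saved copy of this journal; the regular expression targets only the PodSandboxMetadata fragments that appear verbatim above:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

// metaRe matches containerd's PodSandboxMetadata fragments, e.g.
// "PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:...,Namespace:calico-system,Attempt:7,}".
var metaRe = regexp.MustCompile(`PodSandboxMetadata\{Name:([^,]+),Uid:[^,]+,Namespace:([^,]+),Attempt:(\d+),`)

func main() {
	attempts := map[string]int{} // highest Attempt seen per namespace/pod

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range metaRe.FindAllStringSubmatch(sc.Text(), -1) {
			key := m[2] + "/" + m[1]
			if n, err := strconv.Atoi(m[3]); err == nil && n > attempts[key] {
				attempts[key] = n
			}
		}
	}
	for pod, n := range attempts {
		fmt.Printf("%s: highest sandbox attempt %d\n", pod, n)
	}
}

Fed this section of the journal on stdin, it would report default/nginx-deployment-8587fbcb89-k9bfm at attempt 2 and calico-system/csi-node-driver-hgspb at attempt 7.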
Mar 17 18:38:57.424917 kubelet[2014]: E0317 18:38:57.424850 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:57.557620 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1-shm.mount: Deactivated successfully. Mar 17 18:38:57.557870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d-shm.mount: Deactivated successfully. Mar 17 18:38:57.558064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3072094281.mount: Deactivated successfully. Mar 17 18:38:57.564817 kubelet[2014]: I0317 18:38:57.564759 2014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d" Mar 17 18:38:57.566036 containerd[1503]: time="2025-03-17T18:38:57.565947643Z" level=info msg="StopPodSandbox for \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\"" Mar 17 18:38:57.566429 containerd[1503]: time="2025-03-17T18:38:57.566384533Z" level=info msg="Ensure that sandbox 8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d in task-service has been cleanup successfully" Mar 17 18:38:57.568835 containerd[1503]: time="2025-03-17T18:38:57.568752410Z" level=info msg="TearDown network for sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\" successfully" Mar 17 18:38:57.568835 containerd[1503]: time="2025-03-17T18:38:57.568790014Z" level=info msg="StopPodSandbox for \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\" returns successfully" Mar 17 18:38:57.571092 containerd[1503]: time="2025-03-17T18:38:57.571049429Z" level=info msg="StopPodSandbox for \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\"" Mar 17 18:38:57.571340 containerd[1503]: time="2025-03-17T18:38:57.571200115Z" level=info msg="TearDown network for sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\" successfully" Mar 17 18:38:57.571340 containerd[1503]: time="2025-03-17T18:38:57.571225335Z" level=info msg="StopPodSandbox for \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\" returns successfully" Mar 17 18:38:57.572306 containerd[1503]: time="2025-03-17T18:38:57.571925341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-k9bfm,Uid:548720f2-9719-430d-bc46-d413201658f3,Namespace:default,Attempt:2,}" Mar 17 18:38:57.573217 systemd[1]: run-netns-cni\x2dba470200\x2d3459\x2deb40\x2dda29\x2d58494fde6a2c.mount: Deactivated successfully. 
Mar 17 18:38:57.588799 kubelet[2014]: I0317 18:38:57.588415 2014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-69pgb" podStartSLOduration=4.212668988 podStartE2EDuration="21.588399275s" podCreationTimestamp="2025-03-17 18:38:36 +0000 UTC" firstStartedPulling="2025-03-17 18:38:39.464872388 +0000 UTC m=+3.412289051" lastFinishedPulling="2025-03-17 18:38:56.840602674 +0000 UTC m=+20.788019338" observedRunningTime="2025-03-17 18:38:57.58821845 +0000 UTC m=+21.535635123" watchObservedRunningTime="2025-03-17 18:38:57.588399275 +0000 UTC m=+21.535815948" Mar 17 18:38:57.589436 kubelet[2014]: I0317 18:38:57.589414 2014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1" Mar 17 18:38:57.590766 containerd[1503]: time="2025-03-17T18:38:57.590581186Z" level=info msg="StopPodSandbox for \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\"" Mar 17 18:38:57.592132 containerd[1503]: time="2025-03-17T18:38:57.590894282Z" level=info msg="Ensure that sandbox 27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1 in task-service has been cleanup successfully" Mar 17 18:38:57.592132 containerd[1503]: time="2025-03-17T18:38:57.591477210Z" level=info msg="TearDown network for sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\" successfully" Mar 17 18:38:57.592132 containerd[1503]: time="2025-03-17T18:38:57.591549972Z" level=info msg="StopPodSandbox for \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\" returns successfully" Mar 17 18:38:57.592226 containerd[1503]: time="2025-03-17T18:38:57.592155393Z" level=info msg="StopPodSandbox for \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\"" Mar 17 18:38:57.592378 containerd[1503]: time="2025-03-17T18:38:57.592320537Z" level=info msg="TearDown network for sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\" successfully" Mar 17 18:38:57.592378 containerd[1503]: time="2025-03-17T18:38:57.592357702Z" level=info msg="StopPodSandbox for \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\" returns successfully" Mar 17 18:38:57.595323 containerd[1503]: time="2025-03-17T18:38:57.595264108Z" level=info msg="StopPodSandbox for \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\"" Mar 17 18:38:57.595477 containerd[1503]: time="2025-03-17T18:38:57.595446947Z" level=info msg="TearDown network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" successfully" Mar 17 18:38:57.595514 containerd[1503]: time="2025-03-17T18:38:57.595475995Z" level=info msg="StopPodSandbox for \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" returns successfully" Mar 17 18:38:57.596636 systemd[1]: run-netns-cni\x2dfa1d3380\x2d4a23\x2d72eb\x2d73b3\x2d0797dd139730.mount: Deactivated successfully. 
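The two durations in the pod_startup_latency_tracker entry above follow from simple timestamp arithmetic over the fields it prints: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (18:38:57.588 − 18:38:36 ≈ 21.588s), and podStartSLOduration additionally excludes the image-pull window, lastFinishedPulling minus firstStartedPulling (≈ 17.376s), leaving ≈ 4.213s. A quick Go check using the exact timestamps from the entry; the result matches the logged 4.212668988 up to sub-microsecond rounding:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST" // time.Parse also accepts fractional seconds in the input

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied verbatim from the calico-node-69pgb startup entry.
	created := mustParse("2025-03-17 18:38:36 +0000 UTC")
	firstPull := mustParse("2025-03-17 18:38:39.464872388 +0000 UTC")
	lastPull := mustParse("2025-03-17 18:38:56.840602674 +0000 UTC")
	running := mustParse("2025-03-17 18:38:57.588399275 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: start-up time minus the image pull

	fmt.Println("podStartE2EDuration:", e2e) // 21.588399275s
	fmt.Println("podStartSLOduration:", slo) // ~4.212668989s
}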
Mar 17 18:38:57.597681 containerd[1503]: time="2025-03-17T18:38:57.597635471Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\"" Mar 17 18:38:57.597819 containerd[1503]: time="2025-03-17T18:38:57.597790267Z" level=info msg="TearDown network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" successfully" Mar 17 18:38:57.597819 containerd[1503]: time="2025-03-17T18:38:57.597815055Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" returns successfully" Mar 17 18:38:57.598313 containerd[1503]: time="2025-03-17T18:38:57.598272675Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\"" Mar 17 18:38:57.598645 containerd[1503]: time="2025-03-17T18:38:57.598613415Z" level=info msg="TearDown network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" successfully" Mar 17 18:38:57.598645 containerd[1503]: time="2025-03-17T18:38:57.598641050Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" returns successfully" Mar 17 18:38:57.599164 containerd[1503]: time="2025-03-17T18:38:57.599125494Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\"" Mar 17 18:38:57.599262 containerd[1503]: time="2025-03-17T18:38:57.599235810Z" level=info msg="TearDown network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" successfully" Mar 17 18:38:57.599262 containerd[1503]: time="2025-03-17T18:38:57.599258365Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" returns successfully" Mar 17 18:38:57.603791 containerd[1503]: time="2025-03-17T18:38:57.603748125Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\"" Mar 17 18:38:57.603895 containerd[1503]: time="2025-03-17T18:38:57.603865216Z" level=info msg="TearDown network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" successfully" Mar 17 18:38:57.603895 containerd[1503]: time="2025-03-17T18:38:57.603889323Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" returns successfully" Mar 17 18:38:57.607001 containerd[1503]: time="2025-03-17T18:38:57.606948170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:7,}" Mar 17 18:38:57.763164 systemd-networkd[1384]: calic4c3f66511c: Link UP Mar 17 18:38:57.763989 systemd-networkd[1384]: calic4c3f66511c: Gained carrier Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.650 [INFO][2883] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.686 [INFO][2883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--hgspb-eth0 csi-node-driver- calico-system e605fe16-99a5-491e-bb58-ea92f025ef2b 1613 0 2025-03-17 18:38:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 
csi-node-driver-hgspb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic4c3f66511c [] []}} ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Namespace="calico-system" Pod="csi-node-driver-hgspb" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--hgspb-" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.688 [INFO][2883] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Namespace="calico-system" Pod="csi-node-driver-hgspb" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.713 [INFO][2903] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" HandleID="k8s-pod-network.7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Workload="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.724 [INFO][2903] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" HandleID="k8s-pod-network.7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Workload="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-hgspb", "timestamp":"2025-03-17 18:38:57.713460922 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.724 [INFO][2903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.724 [INFO][2903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.724 [INFO][2903] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.726 [INFO][2903] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" host="10.0.0.4" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.729 [INFO][2903] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.733 [INFO][2903] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.735 [INFO][2903] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.738 [INFO][2903] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.738 [INFO][2903] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" host="10.0.0.4" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.740 [INFO][2903] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.744 [INFO][2903] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" host="10.0.0.4" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.749 [INFO][2903] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" host="10.0.0.4" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.749 [INFO][2903] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" host="10.0.0.4" Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.749 [INFO][2903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:38:57.775891 containerd[1503]: 2025-03-17 18:38:57.749 [INFO][2903] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" HandleID="k8s-pod-network.7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Workload="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" Mar 17 18:38:57.776342 containerd[1503]: 2025-03-17 18:38:57.752 [INFO][2883] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Namespace="calico-system" Pod="csi-node-driver-hgspb" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--hgspb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e605fe16-99a5-491e-bb58-ea92f025ef2b", ResourceVersion:"1613", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-hgspb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4c3f66511c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:38:57.776342 containerd[1503]: 2025-03-17 18:38:57.752 [INFO][2883] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Namespace="calico-system" Pod="csi-node-driver-hgspb" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" Mar 17 18:38:57.776342 containerd[1503]: 2025-03-17 18:38:57.752 [INFO][2883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4c3f66511c ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Namespace="calico-system" Pod="csi-node-driver-hgspb" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" Mar 17 18:38:57.776342 containerd[1503]: 2025-03-17 18:38:57.764 [INFO][2883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Namespace="calico-system" Pod="csi-node-driver-hgspb" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" Mar 17 18:38:57.776342 containerd[1503]: 2025-03-17 18:38:57.764 [INFO][2883] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Namespace="calico-system" Pod="csi-node-driver-hgspb" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--hgspb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e605fe16-99a5-491e-bb58-ea92f025ef2b", ResourceVersion:"1613", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c", Pod:"csi-node-driver-hgspb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4c3f66511c", MAC:"4a:d2:9b:42:46:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:38:57.776342 containerd[1503]: 2025-03-17 18:38:57.772 [INFO][2883] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c" Namespace="calico-system" Pod="csi-node-driver-hgspb" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--hgspb-eth0" Mar 17 18:38:57.793250 containerd[1503]: time="2025-03-17T18:38:57.792922875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:38:57.793250 containerd[1503]: time="2025-03-17T18:38:57.792973514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:38:57.793250 containerd[1503]: time="2025-03-17T18:38:57.792986729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:38:57.793250 containerd[1503]: time="2025-03-17T18:38:57.793196172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:38:57.811847 systemd[1]: Started cri-containerd-7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c.scope - libcontainer container 7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c. 
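The ipam/ lines from the CNI plugin above trace a block-affinity allocation: acquire the host-wide IPAM lock, confirm that host 10.0.0.4 has affinity for the 192.168.99.192/26 block, load the block, hand out the next free address (192.168.99.193 here), and write the block back before releasing the lock. A toy sketch of that idea, assuming a plain in-memory /26 bitmap rather than Calico's real datastore:

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models a /26 affinity block such as 192.168.99.192/26 from the log:
// 64 addresses, a bitmap of which are in use, and the host the block is affine to.
type block struct {
	base netip.Addr
	used [64]bool
	host string
}

var ipamLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

// assign hands out the next free address in the block, mirroring the
// "Attempting to assign 1 addresses from block" step.
func (b *block) assign(host string) (netip.Addr, error) {
	ipamLock.Lock()
	defer ipamLock.Unlock()

	if b.host != host {
		return netip.Addr{}, fmt.Errorf("block %s/26 is not affine to host %s", b.base, host)
	}
	addr := b.base
	for i := range b.used {
		if !b.used[i] {
			b.used[i] = true
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %s/26 is exhausted", b.base)
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.99.192"), host: "10.0.0.4"}
	b.used[0] = true // keep the block's network address (.192) out of the pool

	ip, err := b.assign("10.0.0.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned", ip) // 192.168.99.193, as in the log
}

The second pod (nginx-deployment-8587fbcb89-k9bfm) goes through the same sequence a few lines later and receives the next address, 192.168.99.194.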
Mar 17 18:38:57.834183 containerd[1503]: time="2025-03-17T18:38:57.834108937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hgspb,Uid:e605fe16-99a5-491e-bb58-ea92f025ef2b,Namespace:calico-system,Attempt:7,} returns sandbox id \"7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c\"" Mar 17 18:38:57.836479 containerd[1503]: time="2025-03-17T18:38:57.836385066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 18:38:57.860994 systemd-networkd[1384]: cali15c0fc92586: Link UP Mar 17 18:38:57.861274 systemd-networkd[1384]: cali15c0fc92586: Gained carrier Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.664 [INFO][2863] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.686 [INFO][2863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0 nginx-deployment-8587fbcb89- default 548720f2-9719-430d-bc46-d413201658f3 1697 0 2025-03-17 18:38:55 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-8587fbcb89-k9bfm eth0 default [] [] [kns.default ksa.default.default] cali15c0fc92586 [] []}} ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Namespace="default" Pod="nginx-deployment-8587fbcb89-k9bfm" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.688 [INFO][2863] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Namespace="default" Pod="nginx-deployment-8587fbcb89-k9bfm" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.715 [INFO][2901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" HandleID="k8s-pod-network.9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.724 [INFO][2901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" HandleID="k8s-pod-network.9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c50), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-8587fbcb89-k9bfm", "timestamp":"2025-03-17 18:38:57.715496676 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.724 [INFO][2901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.749 [INFO][2901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.749 [INFO][2901] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.829 [INFO][2901] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" host="10.0.0.4" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.834 [INFO][2901] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.839 [INFO][2901] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.841 [INFO][2901] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.843 [INFO][2901] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.843 [INFO][2901] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" host="10.0.0.4" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.845 [INFO][2901] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7 Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.851 [INFO][2901] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" host="10.0.0.4" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.856 [INFO][2901] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" host="10.0.0.4" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.856 [INFO][2901] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" host="10.0.0.4" Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.856 [INFO][2901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:38:57.870485 containerd[1503]: 2025-03-17 18:38:57.856 [INFO][2901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" HandleID="k8s-pod-network.9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" Mar 17 18:38:57.871162 containerd[1503]: 2025-03-17 18:38:57.858 [INFO][2863] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Namespace="default" Pod="nginx-deployment-8587fbcb89-k9bfm" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"548720f2-9719-430d-bc46-d413201658f3", ResourceVersion:"1697", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-k9bfm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali15c0fc92586", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:38:57.871162 containerd[1503]: 2025-03-17 18:38:57.858 [INFO][2863] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Namespace="default" Pod="nginx-deployment-8587fbcb89-k9bfm" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" Mar 17 18:38:57.871162 containerd[1503]: 2025-03-17 18:38:57.858 [INFO][2863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15c0fc92586 ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Namespace="default" Pod="nginx-deployment-8587fbcb89-k9bfm" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" Mar 17 18:38:57.871162 containerd[1503]: 2025-03-17 18:38:57.860 [INFO][2863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Namespace="default" Pod="nginx-deployment-8587fbcb89-k9bfm" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" Mar 17 18:38:57.871162 containerd[1503]: 2025-03-17 18:38:57.860 [INFO][2863] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Namespace="default" Pod="nginx-deployment-8587fbcb89-k9bfm" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"548720f2-9719-430d-bc46-d413201658f3", ResourceVersion:"1697", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 38, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7", Pod:"nginx-deployment-8587fbcb89-k9bfm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali15c0fc92586", MAC:"0a:52:b7:45:7f:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:38:57.871162 containerd[1503]: 2025-03-17 18:38:57.867 [INFO][2863] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7" Namespace="default" Pod="nginx-deployment-8587fbcb89-k9bfm" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--k9bfm-eth0" Mar 17 18:38:57.889021 containerd[1503]: time="2025-03-17T18:38:57.888917437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:38:57.889021 containerd[1503]: time="2025-03-17T18:38:57.888977905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:38:57.889021 containerd[1503]: time="2025-03-17T18:38:57.888992515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:38:57.889250 containerd[1503]: time="2025-03-17T18:38:57.889061420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:38:57.907835 systemd[1]: Started cri-containerd-9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7.scope - libcontainer container 9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7. 
Mar 17 18:38:57.943521 containerd[1503]: time="2025-03-17T18:38:57.943446886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-k9bfm,Uid:548720f2-9719-430d-bc46-d413201658f3,Namespace:default,Attempt:2,} returns sandbox id \"9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7\"" Mar 17 18:38:58.426318 kubelet[2014]: E0317 18:38:58.426281 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:58.476742 kernel: bpftool[3140]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 18:38:58.620133 systemd[1]: run-containerd-runc-k8s.io-0c3fa42cdbdee956a95c62cfe9535e5eb77ced50cb0f59dfba4f56a3db3b8b7a-runc.kZe9L3.mount: Deactivated successfully. Mar 17 18:38:58.721516 systemd-networkd[1384]: vxlan.calico: Link UP Mar 17 18:38:58.721526 systemd-networkd[1384]: vxlan.calico: Gained carrier Mar 17 18:38:58.962083 systemd-networkd[1384]: cali15c0fc92586: Gained IPv6LL Mar 17 18:38:59.165221 sshd[2841]: Received disconnect from 61.206.202.179 port 59168:11: disconnected by user [preauth] Mar 17 18:38:59.165221 sshd[2841]: Disconnected from authenticating user root 61.206.202.179 port 59168 [preauth] Mar 17 18:38:59.169441 systemd[1]: sshd@10-37.27.32.129:22-61.206.202.179:59168.service: Deactivated successfully. Mar 17 18:38:59.345939 systemd-networkd[1384]: calic4c3f66511c: Gained IPv6LL Mar 17 18:38:59.427479 kubelet[2014]: E0317 18:38:59.427304 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:38:59.462028 systemd[1]: Started sshd@11-37.27.32.129:22-61.206.202.179:59576.service - OpenSSH per-connection server daemon (61.206.202.179:59576). 
Mar 17 18:39:00.132291 containerd[1503]: time="2025-03-17T18:39:00.132235988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:00.133238 containerd[1503]: time="2025-03-17T18:39:00.133046707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 17 18:39:00.134037 containerd[1503]: time="2025-03-17T18:39:00.133986659Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:00.135598 containerd[1503]: time="2025-03-17T18:39:00.135577939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:00.136197 containerd[1503]: time="2025-03-17T18:39:00.136076786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 2.299662945s" Mar 17 18:39:00.136197 containerd[1503]: time="2025-03-17T18:39:00.136103589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 17 18:39:00.137479 containerd[1503]: time="2025-03-17T18:39:00.137313741Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:39:00.138402 containerd[1503]: time="2025-03-17T18:39:00.138204218Z" level=info msg="CreateContainer within sandbox \"7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 18:39:00.153659 containerd[1503]: time="2025-03-17T18:39:00.153616150Z" level=info msg="CreateContainer within sandbox \"7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1d97ceb23bfa0c0bc9ece578b360b000de1bbf0f234df7c704fc990538b0afed\"" Mar 17 18:39:00.154047 containerd[1503]: time="2025-03-17T18:39:00.153991005Z" level=info msg="StartContainer for \"1d97ceb23bfa0c0bc9ece578b360b000de1bbf0f234df7c704fc990538b0afed\"" Mar 17 18:39:00.185849 systemd[1]: Started cri-containerd-1d97ceb23bfa0c0bc9ece578b360b000de1bbf0f234df7c704fc990538b0afed.scope - libcontainer container 1d97ceb23bfa0c0bc9ece578b360b000de1bbf0f234df7c704fc990538b0afed. 
Mar 17 18:39:00.223484 containerd[1503]: time="2025-03-17T18:39:00.223425564Z" level=info msg="StartContainer for \"1d97ceb23bfa0c0bc9ece578b360b000de1bbf0f234df7c704fc990538b0afed\" returns successfully" Mar 17 18:39:00.428175 kubelet[2014]: E0317 18:39:00.428047 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:00.690871 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL Mar 17 18:39:01.282243 sshd[3237]: Invalid user admin from 61.206.202.179 port 59576 Mar 17 18:39:01.429823 kubelet[2014]: E0317 18:39:01.429762 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:02.430670 kubelet[2014]: E0317 18:39:02.430637 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:02.699833 sshd[3237]: maximum authentication attempts exceeded for invalid user admin from 61.206.202.179 port 59576 ssh2 [preauth] Mar 17 18:39:02.699833 sshd[3237]: Disconnecting invalid user admin 61.206.202.179 port 59576: Too many authentication failures [preauth] Mar 17 18:39:02.701228 systemd[1]: sshd@11-37.27.32.129:22-61.206.202.179:59576.service: Deactivated successfully. Mar 17 18:39:03.110056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728482247.mount: Deactivated successfully. Mar 17 18:39:03.268797 systemd[1]: Started sshd@12-37.27.32.129:22-61.206.202.179:60196.service - OpenSSH per-connection server daemon (61.206.202.179:60196). Mar 17 18:39:03.431343 kubelet[2014]: E0317 18:39:03.431237 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:04.067831 containerd[1503]: time="2025-03-17T18:39:04.067765548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:04.068659 containerd[1503]: time="2025-03-17T18:39:04.068614555Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 17 18:39:04.069434 containerd[1503]: time="2025-03-17T18:39:04.069380932Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:04.071585 containerd[1503]: time="2025-03-17T18:39:04.071547402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:04.072452 containerd[1503]: time="2025-03-17T18:39:04.072279891Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 3.934942644s" Mar 17 18:39:04.072452 containerd[1503]: time="2025-03-17T18:39:04.072318856Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 18:39:04.074009 containerd[1503]: time="2025-03-17T18:39:04.073954781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 18:39:04.074985 
containerd[1503]: time="2025-03-17T18:39:04.074944854Z" level=info msg="CreateContainer within sandbox \"9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 18:39:04.086349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302348403.mount: Deactivated successfully. Mar 17 18:39:04.090478 containerd[1503]: time="2025-03-17T18:39:04.090439177Z" level=info msg="CreateContainer within sandbox \"9637b82825dfb896465a167fafb9f3a78cc353cbbdc25b6901538c649c5c42f7\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d96828b0288910e7353702b47370687e98ed7463cfbb5941ffd63c0dbc4b1efa\"" Mar 17 18:39:04.091107 containerd[1503]: time="2025-03-17T18:39:04.091063595Z" level=info msg="StartContainer for \"d96828b0288910e7353702b47370687e98ed7463cfbb5941ffd63c0dbc4b1efa\"" Mar 17 18:39:04.116201 systemd[1]: run-containerd-runc-k8s.io-d96828b0288910e7353702b47370687e98ed7463cfbb5941ffd63c0dbc4b1efa-runc.Qojl2j.mount: Deactivated successfully. Mar 17 18:39:04.124858 systemd[1]: Started cri-containerd-d96828b0288910e7353702b47370687e98ed7463cfbb5941ffd63c0dbc4b1efa.scope - libcontainer container d96828b0288910e7353702b47370687e98ed7463cfbb5941ffd63c0dbc4b1efa. Mar 17 18:39:04.155836 containerd[1503]: time="2025-03-17T18:39:04.155776906Z" level=info msg="StartContainer for \"d96828b0288910e7353702b47370687e98ed7463cfbb5941ffd63c0dbc4b1efa\" returns successfully" Mar 17 18:39:04.432452 kubelet[2014]: E0317 18:39:04.432296 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:04.630738 kubelet[2014]: I0317 18:39:04.628899 2014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-k9bfm" podStartSLOduration=3.500003655 podStartE2EDuration="9.628883503s" podCreationTimestamp="2025-03-17 18:38:55 +0000 UTC" firstStartedPulling="2025-03-17 18:38:57.94466367 +0000 UTC m=+21.892080333" lastFinishedPulling="2025-03-17 18:39:04.073543518 +0000 UTC m=+28.020960181" observedRunningTime="2025-03-17 18:39:04.627910172 +0000 UTC m=+28.575326845" watchObservedRunningTime="2025-03-17 18:39:04.628883503 +0000 UTC m=+28.576300176" Mar 17 18:39:05.036589 sshd[3292]: Invalid user admin from 61.206.202.179 port 60196 Mar 17 18:39:05.433524 kubelet[2014]: E0317 18:39:05.433375 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:06.433998 kubelet[2014]: E0317 18:39:06.433952 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:06.450483 sshd[3292]: maximum authentication attempts exceeded for invalid user admin from 61.206.202.179 port 60196 ssh2 [preauth] Mar 17 18:39:06.450483 sshd[3292]: Disconnecting invalid user admin 61.206.202.179 port 60196: Too many authentication failures [preauth] Mar 17 18:39:06.453336 systemd[1]: sshd@12-37.27.32.129:22-61.206.202.179:60196.service: Deactivated successfully. 
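The kubelet error repeated throughout this log ("Unable to read config path ... /etc/kubernetes/manifests") only means the static-pod manifest directory the kubelet polls does not exist; no workloads are affected. A minimal way to quiet it, assuming the default path should simply exist (alternatively, staticPodPath can be changed or cleared in the KubeletConfiguration):

# Create the directory the kubelet watches for static pod manifests (path taken from the log).
sudo mkdir -p /etc/kubernetes/manifests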
Mar 17 18:39:06.748953 containerd[1503]: time="2025-03-17T18:39:06.748903347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:06.749853 containerd[1503]: time="2025-03-17T18:39:06.749813240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 17 18:39:06.750603 containerd[1503]: time="2025-03-17T18:39:06.750566207Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:06.752348 containerd[1503]: time="2025-03-17T18:39:06.752311148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:06.752924 containerd[1503]: time="2025-03-17T18:39:06.752774921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 2.678731538s" Mar 17 18:39:06.752924 containerd[1503]: time="2025-03-17T18:39:06.752800151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 17 18:39:06.754571 containerd[1503]: time="2025-03-17T18:39:06.754551665Z" level=info msg="CreateContainer within sandbox \"7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 18:39:06.770761 containerd[1503]: time="2025-03-17T18:39:06.770724794Z" level=info msg="CreateContainer within sandbox \"7f4ee503dc3ae8c92eb53fbc1e1f72d96c5533a6487bbbd45de68c6b37e6788c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6aa5182385f1b14c0da5bb5b51e81f83a2ed371d03eba4a512197fde53ef5d54\"" Mar 17 18:39:06.771226 containerd[1503]: time="2025-03-17T18:39:06.771162016Z" level=info msg="StartContainer for \"6aa5182385f1b14c0da5bb5b51e81f83a2ed371d03eba4a512197fde53ef5d54\"" Mar 17 18:39:06.801887 systemd[1]: Started cri-containerd-6aa5182385f1b14c0da5bb5b51e81f83a2ed371d03eba4a512197fde53ef5d54.scope - libcontainer container 6aa5182385f1b14c0da5bb5b51e81f83a2ed371d03eba4a512197fde53ef5d54. Mar 17 18:39:06.835204 containerd[1503]: time="2025-03-17T18:39:06.835147867Z" level=info msg="StartContainer for \"6aa5182385f1b14c0da5bb5b51e81f83a2ed371d03eba4a512197fde53ef5d54\" returns successfully" Mar 17 18:39:07.027055 systemd[1]: Started sshd@13-37.27.32.129:22-61.206.202.179:60830.service - OpenSSH per-connection server daemon (61.206.202.179:60830). 
Mar 17 18:39:07.434845 kubelet[2014]: E0317 18:39:07.434599 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:07.505673 kubelet[2014]: I0317 18:39:07.505620 2014 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 18:39:07.505673 kubelet[2014]: I0317 18:39:07.505654 2014 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 18:39:07.637055 kubelet[2014]: I0317 18:39:07.637008 2014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hgspb" podStartSLOduration=22.719511999 podStartE2EDuration="31.636993209s" podCreationTimestamp="2025-03-17 18:38:36 +0000 UTC" firstStartedPulling="2025-03-17 18:38:57.835972223 +0000 UTC m=+21.783388887" lastFinishedPulling="2025-03-17 18:39:06.753453434 +0000 UTC m=+30.700870097" observedRunningTime="2025-03-17 18:39:07.63644943 +0000 UTC m=+31.583866103" watchObservedRunningTime="2025-03-17 18:39:07.636993209 +0000 UTC m=+31.584409882" Mar 17 18:39:08.435803 kubelet[2014]: E0317 18:39:08.435759 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:08.855048 sshd[3421]: Invalid user admin from 61.206.202.179 port 60830 Mar 17 18:39:09.436586 kubelet[2014]: E0317 18:39:09.436492 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:09.983304 sshd[3421]: Received disconnect from 61.206.202.179 port 60830:11: disconnected by user [preauth] Mar 17 18:39:09.983304 sshd[3421]: Disconnected from invalid user admin 61.206.202.179 port 60830 [preauth] Mar 17 18:39:09.986117 systemd[1]: sshd@13-37.27.32.129:22-61.206.202.179:60830.service: Deactivated successfully. Mar 17 18:39:10.272784 systemd[1]: Started sshd@14-37.27.32.129:22-61.206.202.179:33094.service - OpenSSH per-connection server daemon (61.206.202.179:33094). Mar 17 18:39:10.437241 kubelet[2014]: E0317 18:39:10.437186 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:11.438333 kubelet[2014]: E0317 18:39:11.438288 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:12.020208 sshd[3434]: Invalid user oracle from 61.206.202.179 port 33094 Mar 17 18:39:12.439145 kubelet[2014]: E0317 18:39:12.438985 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:13.439667 kubelet[2014]: E0317 18:39:13.439608 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:13.461158 sshd[3434]: maximum authentication attempts exceeded for invalid user oracle from 61.206.202.179 port 33094 ssh2 [preauth] Mar 17 18:39:13.461158 sshd[3434]: Disconnecting invalid user oracle 61.206.202.179 port 33094: Too many authentication failures [preauth] Mar 17 18:39:13.463847 systemd[1]: sshd@14-37.27.32.129:22-61.206.202.179:33094.service: Deactivated successfully. 
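The sshd lines above document a password-guessing run from 61.206.202.179 against non-existent users (admin, oracle), each connection cut off with "Too many authentication failures". A hedged hardening sketch; the drop-in config directory is an assumption about this host's OpenSSH layout, and because sshd here runs as per-connection daemons, new connections pick the settings up without a service restart:

# Refuse password logins and cut the per-connection retry budget.
sudo tee /etc/ssh/sshd_config.d/90-hardening.conf <<'EOF'
PasswordAuthentication no
PermitRootLogin no
MaxAuthTries 3
EOF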
Mar 17 18:39:14.026809 systemd[1]: Started sshd@15-37.27.32.129:22-61.206.202.179:33704.service - OpenSSH per-connection server daemon (61.206.202.179:33704). Mar 17 18:39:14.440121 kubelet[2014]: E0317 18:39:14.439951 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:15.440792 kubelet[2014]: E0317 18:39:15.440732 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:15.870657 sshd[3463]: Invalid user oracle from 61.206.202.179 port 33704 Mar 17 18:39:16.409623 kubelet[2014]: E0317 18:39:16.409562 2014 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:16.441239 kubelet[2014]: E0317 18:39:16.441210 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:17.057153 systemd[1]: Created slice kubepods-besteffort-poda5074431_a47d_4dd9_8cde_f95e5a0a520a.slice - libcontainer container kubepods-besteffort-poda5074431_a47d_4dd9_8cde_f95e5a0a520a.slice. Mar 17 18:39:17.168191 kubelet[2014]: I0317 18:39:17.168138 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a5074431-a47d-4dd9-8cde-f95e5a0a520a-data\") pod \"nfs-server-provisioner-0\" (UID: \"a5074431-a47d-4dd9-8cde-f95e5a0a520a\") " pod="default/nfs-server-provisioner-0" Mar 17 18:39:17.168191 kubelet[2014]: I0317 18:39:17.168185 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8psff\" (UniqueName: \"kubernetes.io/projected/a5074431-a47d-4dd9-8cde-f95e5a0a520a-kube-api-access-8psff\") pod \"nfs-server-provisioner-0\" (UID: \"a5074431-a47d-4dd9-8cde-f95e5a0a520a\") " pod="default/nfs-server-provisioner-0" Mar 17 18:39:17.315213 sshd[3463]: maximum authentication attempts exceeded for invalid user oracle from 61.206.202.179 port 33704 ssh2 [preauth] Mar 17 18:39:17.315213 sshd[3463]: Disconnecting invalid user oracle 61.206.202.179 port 33704: Too many authentication failures [preauth] Mar 17 18:39:17.318303 systemd[1]: sshd@15-37.27.32.129:22-61.206.202.179:33704.service: Deactivated successfully. 
Mar 17 18:39:17.360402 containerd[1503]: time="2025-03-17T18:39:17.360339183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a5074431-a47d-4dd9-8cde-f95e5a0a520a,Namespace:default,Attempt:0,}" Mar 17 18:39:17.441474 kubelet[2014]: E0317 18:39:17.441425 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:17.494766 systemd-networkd[1384]: cali60e51b789ff: Link UP Mar 17 18:39:17.495065 systemd-networkd[1384]: cali60e51b789ff: Gained carrier Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.414 [INFO][3475] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default a5074431-a47d-4dd9-8cde-f95e5a0a520a 1808 0 2025-03-17 18:39:17 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.414 [INFO][3475] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.448 [INFO][3487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" HandleID="k8s-pod-network.f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.457 [INFO][3487] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" HandleID="k8s-pod-network.f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285890), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 18:39:17.448309464 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.457 [INFO][3487] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.457 [INFO][3487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.457 [INFO][3487] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.460 [INFO][3487] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" host="10.0.0.4" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.466 [INFO][3487] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.471 [INFO][3487] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.473 [INFO][3487] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.476 [INFO][3487] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.476 [INFO][3487] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" host="10.0.0.4" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.478 [INFO][3487] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181 Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.482 [INFO][3487] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" host="10.0.0.4" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.488 [INFO][3487] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" host="10.0.0.4" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.488 [INFO][3487] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" host="10.0.0.4" Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.488 [INFO][3487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:39:17.509200 containerd[1503]: 2025-03-17 18:39:17.488 [INFO][3487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" HandleID="k8s-pod-network.f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:39:17.510145 containerd[1503]: 2025-03-17 18:39:17.491 [INFO][3475] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a5074431-a47d-4dd9-8cde-f95e5a0a520a", ResourceVersion:"1808", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 39, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:39:17.510145 containerd[1503]: 2025-03-17 18:39:17.491 [INFO][3475] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:39:17.510145 containerd[1503]: 2025-03-17 18:39:17.491 [INFO][3475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:39:17.510145 containerd[1503]: 2025-03-17 18:39:17.496 [INFO][3475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:39:17.510356 containerd[1503]: 2025-03-17 18:39:17.496 [INFO][3475] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a5074431-a47d-4dd9-8cde-f95e5a0a520a", ResourceVersion:"1808", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 39, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"9a:54:ea:d9:b3:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:39:17.510356 containerd[1503]: 2025-03-17 18:39:17.506 [INFO][3475] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Mar 17 18:39:17.533150 containerd[1503]: time="2025-03-17T18:39:17.533038952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:39:17.533150 containerd[1503]: time="2025-03-17T18:39:17.533100180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:39:17.533150 containerd[1503]: time="2025-03-17T18:39:17.533113856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:39:17.533309 containerd[1503]: time="2025-03-17T18:39:17.533185215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:39:17.558842 systemd[1]: Started cri-containerd-f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181.scope - libcontainer container f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181. 
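The Calico CNI trace above follows the usual IPAM path: acquire the host-wide lock, confirm node 10.0.0.4's affinity for block 192.168.99.192/26, claim 192.168.99.195, then write the nfs-server-provisioner-0 WorkloadEndpoint with host-side interface cali60e51b789ff. A sketch of how that result could be inspected afterwards; calicoctl being installed and configured against the same datastore is an assumption:

# Show IPAM block usage and the workload endpoint the plugin just wrote.
calicoctl ipam show --show-blocks
calicoctl get workloadendpoints -n default -o wide
ip addr show cali60e51b789ff   # host side of the veth named in the trace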
Mar 17 18:39:17.595627 containerd[1503]: time="2025-03-17T18:39:17.595542473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a5074431-a47d-4dd9-8cde-f95e5a0a520a,Namespace:default,Attempt:0,} returns sandbox id \"f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181\"" Mar 17 18:39:17.597067 containerd[1503]: time="2025-03-17T18:39:17.596950714Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 18:39:17.885005 systemd[1]: Started sshd@16-37.27.32.129:22-61.206.202.179:34304.service - OpenSSH per-connection server daemon (61.206.202.179:34304). Mar 17 18:39:18.280757 systemd[1]: run-containerd-runc-k8s.io-f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181-runc.03w3MV.mount: Deactivated successfully. Mar 17 18:39:18.442730 kubelet[2014]: E0317 18:39:18.442151 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:19.186528 systemd-networkd[1384]: cali60e51b789ff: Gained IPv6LL Mar 17 18:39:19.408740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2148262422.mount: Deactivated successfully. Mar 17 18:39:19.443364 kubelet[2014]: E0317 18:39:19.443272 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:19.747156 sshd[3548]: Invalid user oracle from 61.206.202.179 port 34304 Mar 17 18:39:20.314684 sshd[3548]: Received disconnect from 61.206.202.179 port 34304:11: disconnected by user [preauth] Mar 17 18:39:20.314942 sshd[3548]: Disconnected from invalid user oracle 61.206.202.179 port 34304 [preauth] Mar 17 18:39:20.321183 systemd[1]: sshd@16-37.27.32.129:22-61.206.202.179:34304.service: Deactivated successfully. Mar 17 18:39:20.443979 kubelet[2014]: E0317 18:39:20.443913 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:20.601834 systemd[1]: Started sshd@17-37.27.32.129:22-61.206.202.179:34706.service - OpenSSH per-connection server daemon (61.206.202.179:34706). 
Mar 17 18:39:20.944518 containerd[1503]: time="2025-03-17T18:39:20.944387799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:20.945600 containerd[1503]: time="2025-03-17T18:39:20.945551635Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039476" Mar 17 18:39:20.946566 containerd[1503]: time="2025-03-17T18:39:20.946520153Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:20.956442 containerd[1503]: time="2025-03-17T18:39:20.956401165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:20.958015 containerd[1503]: time="2025-03-17T18:39:20.957380866Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.360400093s" Mar 17 18:39:20.958015 containerd[1503]: time="2025-03-17T18:39:20.957407427Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 17 18:39:20.959535 containerd[1503]: time="2025-03-17T18:39:20.959498110Z" level=info msg="CreateContainer within sandbox \"f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 18:39:20.998234 containerd[1503]: time="2025-03-17T18:39:20.998194333Z" level=info msg="CreateContainer within sandbox \"f12cd27679a9f6d880536e3da15f58b163b5d7f8d7e3fb5f52fd0469bf9b2181\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2dea13dd439b2ad31680b9a84a3f9d2f6ba5f104d00424e4b0bcf9b944219c19\"" Mar 17 18:39:20.998724 containerd[1503]: time="2025-03-17T18:39:20.998681964Z" level=info msg="StartContainer for \"2dea13dd439b2ad31680b9a84a3f9d2f6ba5f104d00424e4b0bcf9b944219c19\"" Mar 17 18:39:21.057835 systemd[1]: Started cri-containerd-2dea13dd439b2ad31680b9a84a3f9d2f6ba5f104d00424e4b0bcf9b944219c19.scope - libcontainer container 2dea13dd439b2ad31680b9a84a3f9d2f6ba5f104d00424e4b0bcf9b944219c19. 
Mar 17 18:39:21.080156 containerd[1503]: time="2025-03-17T18:39:21.080047115Z" level=info msg="StartContainer for \"2dea13dd439b2ad31680b9a84a3f9d2f6ba5f104d00424e4b0bcf9b944219c19\" returns successfully" Mar 17 18:39:21.444861 kubelet[2014]: E0317 18:39:21.444808 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:21.667119 kubelet[2014]: I0317 18:39:21.667058 2014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.305614012 podStartE2EDuration="4.667041168s" podCreationTimestamp="2025-03-17 18:39:17 +0000 UTC" firstStartedPulling="2025-03-17 18:39:17.596671494 +0000 UTC m=+41.544088158" lastFinishedPulling="2025-03-17 18:39:20.958098651 +0000 UTC m=+44.905515314" observedRunningTime="2025-03-17 18:39:21.665735631 +0000 UTC m=+45.613152304" watchObservedRunningTime="2025-03-17 18:39:21.667041168 +0000 UTC m=+45.614457841" Mar 17 18:39:22.335911 sshd[3567]: Invalid user usuario from 61.206.202.179 port 34706 Mar 17 18:39:22.445139 kubelet[2014]: E0317 18:39:22.445079 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:23.446316 kubelet[2014]: E0317 18:39:23.446188 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:23.762989 sshd[3567]: maximum authentication attempts exceeded for invalid user usuario from 61.206.202.179 port 34706 ssh2 [preauth] Mar 17 18:39:23.762989 sshd[3567]: Disconnecting invalid user usuario 61.206.202.179 port 34706: Too many authentication failures [preauth] Mar 17 18:39:23.765806 systemd[1]: sshd@17-37.27.32.129:22-61.206.202.179:34706.service: Deactivated successfully. Mar 17 18:39:24.336115 systemd[1]: Started sshd@18-37.27.32.129:22-61.206.202.179:35320.service - OpenSSH per-connection server daemon (61.206.202.179:35320). Mar 17 18:39:24.446664 kubelet[2014]: E0317 18:39:24.446618 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:25.447794 kubelet[2014]: E0317 18:39:25.447672 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:26.353442 sshd[3670]: Invalid user usuario from 61.206.202.179 port 35320 Mar 17 18:39:26.448122 kubelet[2014]: E0317 18:39:26.448027 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:27.448368 kubelet[2014]: E0317 18:39:27.448319 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:27.781822 sshd[3670]: maximum authentication attempts exceeded for invalid user usuario from 61.206.202.179 port 35320 ssh2 [preauth] Mar 17 18:39:27.781822 sshd[3670]: Disconnecting invalid user usuario 61.206.202.179 port 35320: Too many authentication failures [preauth] Mar 17 18:39:27.784599 systemd[1]: sshd@18-37.27.32.129:22-61.206.202.179:35320.service: Deactivated successfully. Mar 17 18:39:28.351890 systemd[1]: Started sshd@19-37.27.32.129:22-61.206.202.179:35958.service - OpenSSH per-connection server daemon (61.206.202.179:35958). 
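nfs-server-provisioner-0 is now reported running (pod startup about 4.7s, of which 3.36s was the image pull). A quick cross-check from a machine with cluster access; kubectl configuration and the provisioner's StorageClass name are assumptions, so the class is listed rather than named:

kubectl get pod nfs-server-provisioner-0 -n default -o wide
kubectl get storageclass   # the chart's provisioner class should appear here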
Mar 17 18:39:28.448630 kubelet[2014]: E0317 18:39:28.448592 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:29.449217 kubelet[2014]: E0317 18:39:29.449156 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:30.217000 sshd[3678]: Invalid user usuario from 61.206.202.179 port 35958 Mar 17 18:39:30.449926 kubelet[2014]: E0317 18:39:30.449858 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:30.782522 sshd[3678]: Received disconnect from 61.206.202.179 port 35958:11: disconnected by user [preauth] Mar 17 18:39:30.782522 sshd[3678]: Disconnected from invalid user usuario 61.206.202.179 port 35958 [preauth] Mar 17 18:39:30.785529 systemd[1]: sshd@19-37.27.32.129:22-61.206.202.179:35958.service: Deactivated successfully. Mar 17 18:39:31.071888 systemd[1]: Started sshd@20-37.27.32.129:22-61.206.202.179:36458.service - OpenSSH per-connection server daemon (61.206.202.179:36458). Mar 17 18:39:31.091763 systemd[1]: Created slice kubepods-besteffort-podb820426c_9789_49da_9b90_99dff25e2966.slice - libcontainer container kubepods-besteffort-podb820426c_9789_49da_9b90_99dff25e2966.slice. Mar 17 18:39:31.238577 kubelet[2014]: I0317 18:39:31.238445 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-53026084-b39a-493f-8dd8-7fe65f9d0d24\" (UniqueName: \"kubernetes.io/nfs/b820426c-9789-49da-9b90-99dff25e2966-pvc-53026084-b39a-493f-8dd8-7fe65f9d0d24\") pod \"test-pod-1\" (UID: \"b820426c-9789-49da-9b90-99dff25e2966\") " pod="default/test-pod-1" Mar 17 18:39:31.238577 kubelet[2014]: I0317 18:39:31.238503 2014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4zf\" (UniqueName: \"kubernetes.io/projected/b820426c-9789-49da-9b90-99dff25e2966-kube-api-access-tx4zf\") pod \"test-pod-1\" (UID: \"b820426c-9789-49da-9b90-99dff25e2966\") " pod="default/test-pod-1" Mar 17 18:39:31.370753 kernel: FS-Cache: Loaded Mar 17 18:39:31.431954 kernel: RPC: Registered named UNIX socket transport module. Mar 17 18:39:31.432081 kernel: RPC: Registered udp transport module. Mar 17 18:39:31.432120 kernel: RPC: Registered tcp transport module. Mar 17 18:39:31.433045 kernel: RPC: Registered tcp-with-tls transport module. Mar 17 18:39:31.433813 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Mar 17 18:39:31.450574 kubelet[2014]: E0317 18:39:31.450541 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:31.688101 kernel: NFS: Registering the id_resolver key type Mar 17 18:39:31.688328 kernel: Key type id_resolver registered Mar 17 18:39:31.697002 kernel: Key type id_legacy registered Mar 17 18:39:31.722951 nfsidmap[3700]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 18:39:31.726631 nfsidmap[3702]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 18:39:31.996044 containerd[1503]: time="2025-03-17T18:39:31.995934764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b820426c-9789-49da-9b90-99dff25e2966,Namespace:default,Attempt:0,}" Mar 17 18:39:32.120995 systemd-networkd[1384]: cali5ec59c6bf6e: Link UP Mar 17 18:39:32.121875 systemd-networkd[1384]: cali5ec59c6bf6e: Gained carrier Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.049 [INFO][3703] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default b820426c-9789-49da-9b90-99dff25e2966 1871 0 2025-03-17 18:39:19 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.050 [INFO][3703] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.078 [INFO][3715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" HandleID="k8s-pod-network.223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Workload="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.087 [INFO][3715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" HandleID="k8s-pod-network.223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a990), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2025-03-17 18:39:32.078614562 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.087 [INFO][3715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.087 [INFO][3715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.087 [INFO][3715] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.090 [INFO][3715] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" host="10.0.0.4" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.094 [INFO][3715] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.099 [INFO][3715] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.101 [INFO][3715] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.103 [INFO][3715] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.103 [INFO][3715] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" host="10.0.0.4" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.105 [INFO][3715] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87 Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.109 [INFO][3715] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" host="10.0.0.4" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.114 [INFO][3715] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 handle="k8s-pod-network.223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" host="10.0.0.4" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.114 [INFO][3715] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" host="10.0.0.4" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.115 [INFO][3715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.115 [INFO][3715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" HandleID="k8s-pod-network.223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Workload="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.117 [INFO][3703] cni-plugin/k8s.go 386: Populated endpoint ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"b820426c-9789-49da-9b90-99dff25e2966", ResourceVersion:"1871", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:39:32.133193 containerd[1503]: 2025-03-17 18:39:32.117 [INFO][3703] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 18:39:32.133941 containerd[1503]: 2025-03-17 18:39:32.117 [INFO][3703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 18:39:32.133941 containerd[1503]: 2025-03-17 18:39:32.121 [INFO][3703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 18:39:32.133941 containerd[1503]: 2025-03-17 18:39:32.122 [INFO][3703] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"b820426c-9789-49da-9b90-99dff25e2966", ResourceVersion:"1871", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 39, 19, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"c6:3c:7a:e2:c6:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:39:32.133941 containerd[1503]: 2025-03-17 18:39:32.129 [INFO][3703] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Mar 17 18:39:32.167806 containerd[1503]: time="2025-03-17T18:39:32.167734038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:39:32.167983 containerd[1503]: time="2025-03-17T18:39:32.167783132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:39:32.167983 containerd[1503]: time="2025-03-17T18:39:32.167795657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:39:32.167983 containerd[1503]: time="2025-03-17T18:39:32.167857836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:39:32.190838 systemd[1]: Started cri-containerd-223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87.scope - libcontainer container 223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87. 
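The nfsidmap warnings at 18:39:31 ("does not map into domain 'localdomain'") mean the client's NFSv4 ID-mapping domain does not match the server's naming; for typical AUTH_SYS mounts the warning is cosmetic. A hedged sketch for aligning the domain anyway; the chosen value is an assumption, not something the log dictates:

# Set the NFSv4 idmapper domain and flush its key cache.
sudo tee /etc/idmapd.conf <<'EOF'
[General]
Domain = default.svc.cluster.local
EOF
sudo nfsidmap -c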
Mar 17 18:39:32.225610 containerd[1503]: time="2025-03-17T18:39:32.225552778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b820426c-9789-49da-9b90-99dff25e2966,Namespace:default,Attempt:0,} returns sandbox id \"223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87\"" Mar 17 18:39:32.235878 containerd[1503]: time="2025-03-17T18:39:32.235852093Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:39:32.451092 kubelet[2014]: E0317 18:39:32.450943 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:32.778436 containerd[1503]: time="2025-03-17T18:39:32.778308295Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:39:32.779364 containerd[1503]: time="2025-03-17T18:39:32.779307982Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Mar 17 18:39:32.783193 containerd[1503]: time="2025-03-17T18:39:32.783141896Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 547.146168ms" Mar 17 18:39:32.783193 containerd[1503]: time="2025-03-17T18:39:32.783175301Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 18:39:32.785653 containerd[1503]: time="2025-03-17T18:39:32.785358017Z" level=info msg="CreateContainer within sandbox \"223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 18:39:32.813465 containerd[1503]: time="2025-03-17T18:39:32.813409110Z" level=info msg="CreateContainer within sandbox \"223f17ae8e3783523de1ac0405e4b04b30328c54e7852115be091d490d179d87\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7f98189ce9ef29a9b997a526365a112ad521a329265b922cdff5a32ba8b6b9b9\"" Mar 17 18:39:32.814179 containerd[1503]: time="2025-03-17T18:39:32.814093252Z" level=info msg="StartContainer for \"7f98189ce9ef29a9b997a526365a112ad521a329265b922cdff5a32ba8b6b9b9\"" Mar 17 18:39:32.844831 systemd[1]: Started cri-containerd-7f98189ce9ef29a9b997a526365a112ad521a329265b922cdff5a32ba8b6b9b9.scope - libcontainer container 7f98189ce9ef29a9b997a526365a112ad521a329265b922cdff5a32ba8b6b9b9. 
Mar 17 18:39:32.874064 containerd[1503]: time="2025-03-17T18:39:32.873923851Z" level=info msg="StartContainer for \"7f98189ce9ef29a9b997a526365a112ad521a329265b922cdff5a32ba8b6b9b9\" returns successfully" Mar 17 18:39:33.173265 sshd[3683]: Invalid user test from 61.206.202.179 port 36458 Mar 17 18:39:33.451331 kubelet[2014]: E0317 18:39:33.451194 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:33.458304 systemd-networkd[1384]: cali5ec59c6bf6e: Gained IPv6LL Mar 17 18:39:33.690391 kubelet[2014]: I0317 18:39:33.690299 2014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.141939141 podStartE2EDuration="14.690274951s" podCreationTimestamp="2025-03-17 18:39:19 +0000 UTC" firstStartedPulling="2025-03-17 18:39:32.235614657 +0000 UTC m=+56.183031320" lastFinishedPulling="2025-03-17 18:39:32.783950457 +0000 UTC m=+56.731367130" observedRunningTime="2025-03-17 18:39:33.6901072 +0000 UTC m=+57.637523863" watchObservedRunningTime="2025-03-17 18:39:33.690274951 +0000 UTC m=+57.637691645" Mar 17 18:39:34.452117 kubelet[2014]: E0317 18:39:34.452054 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:34.598020 sshd[3683]: maximum authentication attempts exceeded for invalid user test from 61.206.202.179 port 36458 ssh2 [preauth] Mar 17 18:39:34.598020 sshd[3683]: Disconnecting invalid user test 61.206.202.179 port 36458: Too many authentication failures [preauth] Mar 17 18:39:34.601088 systemd[1]: sshd@20-37.27.32.129:22-61.206.202.179:36458.service: Deactivated successfully. Mar 17 18:39:35.174405 systemd[1]: Started sshd@21-37.27.32.129:22-61.206.202.179:37158.service - OpenSSH per-connection server daemon (61.206.202.179:37158). 
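test-pod-1 is now running against the NFS-backed claim (volume pvc-53026084-b39a-493f-8dd8-7fe65f9d0d24) after a cached nginx pull of roughly 547ms. A minimal check of the claim and the pod's mounts, assuming kubectl access to this cluster:

kubectl get pvc -n default
kubectl describe pod test-pod-1 -n default | grep -A4 'Mounts:'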
Mar 17 18:39:35.453128 kubelet[2014]: E0317 18:39:35.452936 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:36.409791 kubelet[2014]: E0317 18:39:36.409735 2014 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:36.427955 containerd[1503]: time="2025-03-17T18:39:36.427906186Z" level=info msg="StopPodSandbox for \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\"" Mar 17 18:39:36.428348 containerd[1503]: time="2025-03-17T18:39:36.428018150Z" level=info msg="TearDown network for sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\" successfully" Mar 17 18:39:36.428348 containerd[1503]: time="2025-03-17T18:39:36.428029802Z" level=info msg="StopPodSandbox for \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\" returns successfully" Mar 17 18:39:36.431574 containerd[1503]: time="2025-03-17T18:39:36.431526950Z" level=info msg="RemovePodSandbox for \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\"" Mar 17 18:39:36.438661 containerd[1503]: time="2025-03-17T18:39:36.438612199Z" level=info msg="Forcibly stopping sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\"" Mar 17 18:39:36.454109 kubelet[2014]: E0317 18:39:36.454065 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:39:36.454502 containerd[1503]: time="2025-03-17T18:39:36.438742670Z" level=info msg="TearDown network for sandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\" successfully" Mar 17 18:39:36.466868 containerd[1503]: time="2025-03-17T18:39:36.466690114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 18:39:36.466868 containerd[1503]: time="2025-03-17T18:39:36.466766510Z" level=info msg="RemovePodSandbox \"4c8dd1d042f07263e5fe0142bbadb93db3305724598d805024966a7336063a1e\" returns successfully"
Mar 17 18:39:36.467564 containerd[1503]: time="2025-03-17T18:39:36.467192696Z" level=info msg="StopPodSandbox for \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\""
Mar 17 18:39:36.467564 containerd[1503]: time="2025-03-17T18:39:36.467313598Z" level=info msg="TearDown network for sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\" successfully"
Mar 17 18:39:36.467564 containerd[1503]: time="2025-03-17T18:39:36.467325942Z" level=info msg="StopPodSandbox for \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\" returns successfully"
Mar 17 18:39:36.467647 containerd[1503]: time="2025-03-17T18:39:36.467625125Z" level=info msg="RemovePodSandbox for \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\""
Mar 17 18:39:36.467761 containerd[1503]: time="2025-03-17T18:39:36.467644221Z" level=info msg="Forcibly stopping sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\""
Mar 17 18:39:36.467761 containerd[1503]: time="2025-03-17T18:39:36.467736358Z" level=info msg="TearDown network for sandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\" successfully"
Mar 17 18:39:36.470111 containerd[1503]: time="2025-03-17T18:39:36.470077892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 18:39:36.470161 containerd[1503]: time="2025-03-17T18:39:36.470118851Z" level=info msg="RemovePodSandbox \"8e60a29598ba745b43274e5eeefb411ea6b8d4884e75545ecbb19a501e42984d\" returns successfully"
Mar 17 18:39:36.470369 containerd[1503]: time="2025-03-17T18:39:36.470344203Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\""
Mar 17 18:39:36.470450 containerd[1503]: time="2025-03-17T18:39:36.470428844Z" level=info msg="TearDown network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" successfully"
Mar 17 18:39:36.470450 containerd[1503]: time="2025-03-17T18:39:36.470443533Z" level=info msg="StopPodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" returns successfully"
Mar 17 18:39:36.470761 containerd[1503]: time="2025-03-17T18:39:36.470657362Z" level=info msg="RemovePodSandbox for \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\""
Mar 17 18:39:36.470761 containerd[1503]: time="2025-03-17T18:39:36.470675156Z" level=info msg="Forcibly stopping sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\""
Mar 17 18:39:36.480932 containerd[1503]: time="2025-03-17T18:39:36.480874651Z" level=info msg="TearDown network for sandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" successfully"
Mar 17 18:39:36.483213 containerd[1503]: time="2025-03-17T18:39:36.483184574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 18:39:36.483283 containerd[1503]: time="2025-03-17T18:39:36.483219171Z" level=info msg="RemovePodSandbox \"9e3be08c15ded320c907355b12bcbf9d602b706545dd3041649d3002f168223b\" returns successfully"
Mar 17 18:39:36.483674 containerd[1503]: time="2025-03-17T18:39:36.483585984Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\""
Mar 17 18:39:36.483674 containerd[1503]: time="2025-03-17T18:39:36.483659284Z" level=info msg="TearDown network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" successfully"
Mar 17 18:39:36.483674 containerd[1503]: time="2025-03-17T18:39:36.483668261Z" level=info msg="StopPodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" returns successfully"
Mar 17 18:39:36.484005 containerd[1503]: time="2025-03-17T18:39:36.483974707Z" level=info msg="RemovePodSandbox for \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\""
Mar 17 18:39:36.484005 containerd[1503]: time="2025-03-17T18:39:36.484001619Z" level=info msg="Forcibly stopping sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\""
Mar 17 18:39:36.484111 containerd[1503]: time="2025-03-17T18:39:36.484073487Z" level=info msg="TearDown network for sandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" successfully"
Mar 17 18:39:36.486694 containerd[1503]: time="2025-03-17T18:39:36.486661123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 18:39:36.486786 containerd[1503]: time="2025-03-17T18:39:36.486698585Z" level=info msg="RemovePodSandbox \"343f303bc89e83476311f0d3ca2915afd6a67c8142673cc48ddb3d8cfe9c4777\" returns successfully"
Mar 17 18:39:36.487102 containerd[1503]: time="2025-03-17T18:39:36.486948243Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\""
Mar 17 18:39:36.487102 containerd[1503]: time="2025-03-17T18:39:36.487041230Z" level=info msg="TearDown network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" successfully"
Mar 17 18:39:36.487102 containerd[1503]: time="2025-03-17T18:39:36.487055919Z" level=info msg="StopPodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" returns successfully"
Mar 17 18:39:36.487438 containerd[1503]: time="2025-03-17T18:39:36.487386742Z" level=info msg="RemovePodSandbox for \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\""
Mar 17 18:39:36.487438 containerd[1503]: time="2025-03-17T18:39:36.487409165Z" level=info msg="Forcibly stopping sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\""
Mar 17 18:39:36.487562 containerd[1503]: time="2025-03-17T18:39:36.487488717Z" level=info msg="TearDown network for sandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" successfully"
Mar 17 18:39:36.489934 containerd[1503]: time="2025-03-17T18:39:36.489906869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 18:39:36.489995 containerd[1503]: time="2025-03-17T18:39:36.489940213Z" level=info msg="RemovePodSandbox \"d09d1b34d19be163cfba6eba750436d2039139e3e4313491d4413bbde05693cb\" returns successfully"
Mar 17 18:39:36.491364 containerd[1503]: time="2025-03-17T18:39:36.490606760Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\""
Mar 17 18:39:36.491364 containerd[1503]: time="2025-03-17T18:39:36.490783007Z" level=info msg="TearDown network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" successfully"
Mar 17 18:39:36.491364 containerd[1503]: time="2025-03-17T18:39:36.490795010Z" level=info msg="StopPodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" returns successfully"
Mar 17 18:39:36.492104 containerd[1503]: time="2025-03-17T18:39:36.492082305Z" level=info msg="RemovePodSandbox for \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\""
Mar 17 18:39:36.492211 containerd[1503]: time="2025-03-17T18:39:36.492196775Z" level=info msg="Forcibly stopping sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\""
Mar 17 18:39:36.492384 containerd[1503]: time="2025-03-17T18:39:36.492325321Z" level=info msg="TearDown network for sandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" successfully"
Mar 17 18:39:36.497339 containerd[1503]: time="2025-03-17T18:39:36.497240394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 18:39:36.497339 containerd[1503]: time="2025-03-17T18:39:36.497271142Z" level=info msg="RemovePodSandbox \"ff7d02d951ab58d2447c1df8cc2d5444b1589913e74193803142824c122f0538\" returns successfully"
Mar 17 18:39:36.499333 containerd[1503]: time="2025-03-17T18:39:36.499316070Z" level=info msg="StopPodSandbox for \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\""
Mar 17 18:39:36.499737 containerd[1503]: time="2025-03-17T18:39:36.499695987Z" level=info msg="TearDown network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" successfully"
Mar 17 18:39:36.499803 containerd[1503]: time="2025-03-17T18:39:36.499789797Z" level=info msg="StopPodSandbox for \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" returns successfully"
Mar 17 18:39:36.500506 containerd[1503]: time="2025-03-17T18:39:36.500488535Z" level=info msg="RemovePodSandbox for \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\""
Mar 17 18:39:36.500808 containerd[1503]: time="2025-03-17T18:39:36.500584068Z" level=info msg="Forcibly stopping sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\""
Mar 17 18:39:36.500808 containerd[1503]: time="2025-03-17T18:39:36.500649934Z" level=info msg="TearDown network for sandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" successfully"
Mar 17 18:39:36.503457 containerd[1503]: time="2025-03-17T18:39:36.503426623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 18:39:36.503457 containerd[1503]: time="2025-03-17T18:39:36.503456940Z" level=info msg="RemovePodSandbox \"8c70d090ed90b9f5b8aa979c3228001451c99992263bf7a9b3f7e9510b50f274\" returns successfully"
Mar 17 18:39:36.503749 containerd[1503]: time="2025-03-17T18:39:36.503678104Z" level=info msg="StopPodSandbox for \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\""
Mar 17 18:39:36.503798 containerd[1503]: time="2025-03-17T18:39:36.503781302Z" level=info msg="TearDown network for sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\" successfully"
Mar 17 18:39:36.503798 containerd[1503]: time="2025-03-17T18:39:36.503795267Z" level=info msg="StopPodSandbox for \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\" returns successfully"
Mar 17 18:39:36.504040 containerd[1503]: time="2025-03-17T18:39:36.504012574Z" level=info msg="RemovePodSandbox for \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\""
Mar 17 18:39:36.504040 containerd[1503]: time="2025-03-17T18:39:36.504036360Z" level=info msg="Forcibly stopping sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\""
Mar 17 18:39:36.504143 containerd[1503]: time="2025-03-17T18:39:36.504105332Z" level=info msg="TearDown network for sandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\" successfully"
Mar 17 18:39:36.506632 containerd[1503]: time="2025-03-17T18:39:36.506601412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 18:39:36.506778 containerd[1503]: time="2025-03-17T18:39:36.506636128Z" level=info msg="RemovePodSandbox \"a085fc9cd92d9225b19c5b1def1ace9502efd7417d9a1ed1a9543b2e925f7f16\" returns successfully"
Mar 17 18:39:36.506942 containerd[1503]: time="2025-03-17T18:39:36.506915724Z" level=info msg="StopPodSandbox for \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\""
Mar 17 18:39:36.507029 containerd[1503]: time="2025-03-17T18:39:36.507003913Z" level=info msg="TearDown network for sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\" successfully"
Mar 17 18:39:36.507029 containerd[1503]: time="2025-03-17T18:39:36.507020174Z" level=info msg="StopPodSandbox for \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\" returns successfully"
Mar 17 18:39:36.507348 containerd[1503]: time="2025-03-17T18:39:36.507276325Z" level=info msg="RemovePodSandbox for \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\""
Mar 17 18:39:36.507348 containerd[1503]: time="2025-03-17T18:39:36.507298197Z" level=info msg="Forcibly stopping sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\""
Mar 17 18:39:36.507432 containerd[1503]: time="2025-03-17T18:39:36.507407045Z" level=info msg="TearDown network for sandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\" successfully"
Mar 17 18:39:36.509808 containerd[1503]: time="2025-03-17T18:39:36.509779800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
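
The StopPodSandbox / TearDown / Forcibly stopping / RemovePodSandbox cycles above (the final removal confirmation follows just below) are containerd acting on the kubelet's CRI garbage collection of stale pod sandboxes. The "not found" warnings only mean the sandbox status was already gone when containerd tried to emit an event for it, and each removal still returns successfully. A minimal Python sketch, assuming journal text in the format shown here is piped on stdin, that pairs each removed sandbox ID with whether its status lookup failed; the regexes and helper names are illustrative, not part of any containerd tooling:

import re
import sys
from collections import defaultdict

# Regexes keyed to the containerd messages above (the log escapes the quotes
# around the sandbox ID, so the backslash is optional in the pattern).
REMOVED = re.compile(r'RemovePodSandbox \\?"([0-9a-f]{64})\\?" returns successfully')
NOT_FOUND = re.compile(r'sandboxID \\?"([0-9a-f]{64})\\?": an error occurred when try to find sandbox: not found')

def summarize(lines):
    """Map sandbox ID -> flags for 'removal confirmed' and 'status lookup failed'."""
    state = defaultdict(lambda: {"removed": False, "status_not_found": False})
    for line in lines:
        if (m := REMOVED.search(line)):
            state[m.group(1)]["removed"] = True
        if (m := NOT_FOUND.search(line)):
            state[m.group(1)]["status_not_found"] = True
    return dict(state)

if __name__ == "__main__":
    for sandbox_id, flags in summarize(sys.stdin).items():
        print(sandbox_id[:12], flags)

Run against this excerpt it would show every sandbox flagged both removed and status_not_found, which is why the warnings can be read as cosmetic.
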
Mar 17 18:39:36.509901 containerd[1503]: time="2025-03-17T18:39:36.509813384Z" level=info msg="RemovePodSandbox \"27e53ab103ade434dd79e905625db685ecbf50f1c3876043135003df28df18d1\" returns successfully"
Mar 17 18:39:37.076687 sshd[3831]: Invalid user test from 61.206.202.179 port 37158
Mar 17 18:39:37.454326 kubelet[2014]: E0317 18:39:37.454197 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:38.455381 kubelet[2014]: E0317 18:39:38.455288 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:38.495782 sshd[3831]: maximum authentication attempts exceeded for invalid user test from 61.206.202.179 port 37158 ssh2 [preauth]
Mar 17 18:39:38.495782 sshd[3831]: Disconnecting invalid user test 61.206.202.179 port 37158: Too many authentication failures [preauth]
Mar 17 18:39:38.500152 systemd[1]: sshd@21-37.27.32.129:22-61.206.202.179:37158.service: Deactivated successfully.
Mar 17 18:39:39.069931 systemd[1]: Started sshd@22-37.27.32.129:22-61.206.202.179:37834.service - OpenSSH per-connection server daemon (61.206.202.179:37834).
Mar 17 18:39:39.456584 kubelet[2014]: E0317 18:39:39.456373 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:40.457135 kubelet[2014]: E0317 18:39:40.457073 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:40.891793 sshd[3848]: Invalid user test from 61.206.202.179 port 37834
Mar 17 18:39:41.455223 sshd[3848]: Received disconnect from 61.206.202.179 port 37834:11: disconnected by user [preauth]
Mar 17 18:39:41.455223 sshd[3848]: Disconnected from invalid user test 61.206.202.179 port 37834 [preauth]
Mar 17 18:39:41.457973 kubelet[2014]: E0317 18:39:41.457936 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:41.458008 systemd[1]: sshd@22-37.27.32.129:22-61.206.202.179:37834.service: Deactivated successfully.
Mar 17 18:39:41.717494 update_engine[1485]: I20250317 18:39:41.717357 1485 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 17 18:39:41.717494 update_engine[1485]: I20250317 18:39:41.717408 1485 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 17 18:39:41.717930 update_engine[1485]: I20250317 18:39:41.717643 1485 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 17 18:39:41.718699 update_engine[1485]: I20250317 18:39:41.718638 1485 omaha_request_params.cc:62] Current group set to stable
Mar 17 18:39:41.720824 update_engine[1485]: I20250317 18:39:41.720726 1485 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 17 18:39:41.720824 update_engine[1485]: I20250317 18:39:41.720746 1485 update_attempter.cc:643] Scheduling an action processor start.
Mar 17 18:39:41.720824 update_engine[1485]: I20250317 18:39:41.720763 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 18:39:41.720824 update_engine[1485]: I20250317 18:39:41.720795 1485 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 17 18:39:41.720932 update_engine[1485]: I20250317 18:39:41.720865 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 18:39:41.720932 update_engine[1485]: I20250317 18:39:41.720876 1485 omaha_request_action.cc:272] Request:
Mar 17 18:39:41.720932 update_engine[1485]:
Mar 17 18:39:41.720932 update_engine[1485]:
Mar 17 18:39:41.720932 update_engine[1485]:
Mar 17 18:39:41.720932 update_engine[1485]:
Mar 17 18:39:41.720932 update_engine[1485]:
Mar 17 18:39:41.720932 update_engine[1485]:
Mar 17 18:39:41.720932 update_engine[1485]:
Mar 17 18:39:41.720932 update_engine[1485]:
Mar 17 18:39:41.720932 update_engine[1485]: I20250317 18:39:41.720885 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:39:41.721373 locksmithd[1507]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 17 18:39:41.725284 update_engine[1485]: I20250317 18:39:41.725241 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:39:41.725603 update_engine[1485]: I20250317 18:39:41.725553 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:39:41.725894 update_engine[1485]: E20250317 18:39:41.725862 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:39:41.725937 update_engine[1485]: I20250317 18:39:41.725920 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 17 18:39:41.742973 systemd[1]: Started sshd@23-37.27.32.129:22-61.206.202.179:38340.service - OpenSSH per-connection server daemon (61.206.202.179:38340).
Mar 17 18:39:42.459084 kubelet[2014]: E0317 18:39:42.459015 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:42.941837 systemd[1]: run-containerd-runc-k8s.io-0c3fa42cdbdee956a95c62cfe9535e5eb77ced50cb0f59dfba4f56a3db3b8b7a-runc.fjiV4C.mount: Deactivated successfully.
Mar 17 18:39:43.459946 kubelet[2014]: E0317 18:39:43.459849 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:43.858221 sshd[3853]: Invalid user user from 61.206.202.179 port 38340
Mar 17 18:39:44.461123 kubelet[2014]: E0317 18:39:44.461032 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:45.284770 sshd[3853]: maximum authentication attempts exceeded for invalid user user from 61.206.202.179 port 38340 ssh2 [preauth]
Mar 17 18:39:45.284770 sshd[3853]: Disconnecting invalid user user 61.206.202.179 port 38340: Too many authentication failures [preauth]
Mar 17 18:39:45.286959 systemd[1]: sshd@23-37.27.32.129:22-61.206.202.179:38340.service: Deactivated successfully.
Mar 17 18:39:45.462311 kubelet[2014]: E0317 18:39:45.462228 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:45.856109 systemd[1]: Started sshd@24-37.27.32.129:22-61.206.202.179:39030.service - OpenSSH per-connection server daemon (61.206.202.179:39030).
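
The update_engine entries above show an OmahaRequestAction posting an update check to a host literally named "disabled" (the request body itself is not preserved in this excerpt), DNS resolution failing, and "No HTTP response, retry 1"; retries 2 and 3 appear further down, roughly ten seconds apart. On Flatcar this pattern typically means the update server was deliberately set to "disabled" in update.conf, so the failures are expected rather than a network fault. A rough Python sketch of that kind of bounded retry loop follows; it illustrates the pattern only and is not update_engine's actual code, and the URL, delay, and retry count are assumptions:

import time
import urllib.error
import urllib.request

def fetch_with_retries(url, max_retries=3, delay_s=10.0):
    """Bounded retry loop mirroring the log above: try, report the failure,
    wait, try again, and give up after max_retries attempts."""
    for attempt in range(1, max_retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except urllib.error.URLError as exc:
            # e.g. name resolution fails for a placeholder host such as "disabled"
            print(f"No HTTP response, retry {attempt}: {exc}")
            time.sleep(delay_s)
    return None

if __name__ == "__main__":
    # "disabled" is not a resolvable host name, matching the host seen in the log;
    # the short delay is only to keep the example quick to run.
    fetch_with_retries("https://disabled/v1/update/", max_retries=3, delay_s=1.0)
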
Mar 17 18:39:46.462648 kubelet[2014]: E0317 18:39:46.462569 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:47.463590 kubelet[2014]: E0317 18:39:47.463542 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:47.668016 sshd[3883]: Invalid user user from 61.206.202.179 port 39030
Mar 17 18:39:48.464158 kubelet[2014]: E0317 18:39:48.464085 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:49.095055 sshd[3883]: maximum authentication attempts exceeded for invalid user user from 61.206.202.179 port 39030 ssh2 [preauth]
Mar 17 18:39:49.095055 sshd[3883]: Disconnecting invalid user user 61.206.202.179 port 39030: Too many authentication failures [preauth]
Mar 17 18:39:49.098054 systemd[1]: sshd@24-37.27.32.129:22-61.206.202.179:39030.service: Deactivated successfully.
Mar 17 18:39:49.464914 kubelet[2014]: E0317 18:39:49.464768 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:49.665067 systemd[1]: Started sshd@25-37.27.32.129:22-61.206.202.179:39732.service - OpenSSH per-connection server daemon (61.206.202.179:39732).
Mar 17 18:39:50.465587 kubelet[2014]: E0317 18:39:50.465478 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:51.419557 kubelet[2014]: E0317 18:39:51.419465 2014 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:58546->10.0.0.2:2379: read: connection timed out"
Mar 17 18:39:51.459992 sshd[3888]: Invalid user user from 61.206.202.179 port 39732
Mar 17 18:39:51.466107 kubelet[2014]: E0317 18:39:51.466031 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:51.718192 update_engine[1485]: I20250317 18:39:51.717954 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:39:51.718867 update_engine[1485]: I20250317 18:39:51.718354 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:39:51.718867 update_engine[1485]: I20250317 18:39:51.718797 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:39:51.719310 update_engine[1485]: E20250317 18:39:51.719182 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:39:51.719310 update_engine[1485]: I20250317 18:39:51.719262 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 17 18:39:52.466817 kubelet[2014]: E0317 18:39:52.466700 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:52.599613 sshd[3888]: Received disconnect from 61.206.202.179 port 39732:11: disconnected by user [preauth]
Mar 17 18:39:52.599613 sshd[3888]: Disconnected from invalid user user 61.206.202.179 port 39732 [preauth]
Mar 17 18:39:52.602601 systemd[1]: sshd@25-37.27.32.129:22-61.206.202.179:39732.service: Deactivated successfully.
Mar 17 18:39:52.901121 systemd[1]: Started sshd@26-37.27.32.129:22-61.206.202.179:40278.service - OpenSSH per-connection server daemon (61.206.202.179:40278).
Mar 17 18:39:53.467208 kubelet[2014]: E0317 18:39:53.467105 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:54.467527 kubelet[2014]: E0317 18:39:54.467428 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:55.468131 kubelet[2014]: E0317 18:39:55.467999 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:55.534152 sshd[3895]: Invalid user ftpuser from 61.206.202.179 port 40278
Mar 17 18:39:56.409892 kubelet[2014]: E0317 18:39:56.409797 2014 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:56.468924 kubelet[2014]: E0317 18:39:56.468832 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:56.969641 sshd[3895]: maximum authentication attempts exceeded for invalid user ftpuser from 61.206.202.179 port 40278 ssh2 [preauth]
Mar 17 18:39:56.969641 sshd[3895]: Disconnecting invalid user ftpuser 61.206.202.179 port 40278: Too many authentication failures [preauth]
Mar 17 18:39:56.973991 systemd[1]: sshd@26-37.27.32.129:22-61.206.202.179:40278.service: Deactivated successfully.
Mar 17 18:39:57.469671 kubelet[2014]: E0317 18:39:57.469631 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:57.534610 systemd[1]: Started sshd@27-37.27.32.129:22-61.206.202.179:41106.service - OpenSSH per-connection server daemon (61.206.202.179:41106).
Mar 17 18:39:58.470284 kubelet[2014]: E0317 18:39:58.470229 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:39:59.278291 sshd[3900]: Invalid user ftpuser from 61.206.202.179 port 41106
Mar 17 18:39:59.471330 kubelet[2014]: E0317 18:39:59.471231 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:40:00.471972 kubelet[2014]: E0317 18:40:00.471888 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:40:00.722624 sshd[3900]: maximum authentication attempts exceeded for invalid user ftpuser from 61.206.202.179 port 41106 ssh2 [preauth]
Mar 17 18:40:00.722624 sshd[3900]: Disconnecting invalid user ftpuser 61.206.202.179 port 41106: Too many authentication failures [preauth]
Mar 17 18:40:00.727236 systemd[1]: sshd@27-37.27.32.129:22-61.206.202.179:41106.service: Deactivated successfully.
Mar 17 18:40:01.296087 systemd[1]: Started sshd@28-37.27.32.129:22-61.206.202.179:41804.service - OpenSSH per-connection server daemon (61.206.202.179:41804).
Mar 17 18:40:01.420211 kubelet[2014]: E0317 18:40:01.420128 2014 controller.go:195] "Failed to update lease" err="Put \"https://65.108.58.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 18:40:01.472992 kubelet[2014]: E0317 18:40:01.472898 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:40:01.717871 update_engine[1485]: I20250317 18:40:01.717655 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 18:40:01.718310 update_engine[1485]: I20250317 18:40:01.717990 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 18:40:01.718310 update_engine[1485]: I20250317 18:40:01.718253 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 18:40:01.718687 update_engine[1485]: E20250317 18:40:01.718634 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 18:40:01.718790 update_engine[1485]: I20250317 18:40:01.718727 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 17 18:40:02.474106 kubelet[2014]: E0317 18:40:02.474040 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:40:03.103102 sshd[3905]: Invalid user ftpuser from 61.206.202.179 port 41804
Mar 17 18:40:03.474659 kubelet[2014]: E0317 18:40:03.474499 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:40:04.253489 sshd[3905]: Received disconnect from 61.206.202.179 port 41804:11: disconnected by user [preauth]
Mar 17 18:40:04.253489 sshd[3905]: Disconnected from invalid user ftpuser 61.206.202.179 port 41804 [preauth]
Mar 17 18:40:04.256500 systemd[1]: sshd@28-37.27.32.129:22-61.206.202.179:41804.service: Deactivated successfully.
Mar 17 18:40:04.475165 kubelet[2014]: E0317 18:40:04.475120 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:40:04.545029 systemd[1]: Started sshd@29-37.27.32.129:22-61.206.202.179:42396.service - OpenSSH per-connection server daemon (61.206.202.179:42396).
Mar 17 18:40:05.475776 kubelet[2014]: E0317 18:40:05.475698 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:40:06.328933 sshd[3911]: Invalid user test1 from 61.206.202.179 port 42396
Mar 17 18:40:06.476413 kubelet[2014]: E0317 18:40:06.476372 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:40:07.477279 kubelet[2014]: E0317 18:40:07.477224 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:40:07.774030 sshd[3911]: maximum authentication attempts exceeded for invalid user test1 from 61.206.202.179 port 42396 ssh2 [preauth]
Mar 17 18:40:07.774030 sshd[3911]: Disconnecting invalid user test1 61.206.202.179 port 42396: Too many authentication failures [preauth]
Mar 17 18:40:07.776825 systemd[1]: sshd@29-37.27.32.129:22-61.206.202.179:42396.service: Deactivated successfully.
Mar 17 18:40:08.338906 systemd[1]: Started sshd@30-37.27.32.129:22-61.206.202.179:43058.service - OpenSSH per-connection server daemon (61.206.202.179:43058).
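
The recurring sshd entries in this excerpt (invalid users test, user, ftpuser, and test1 from 61.206.202.179, each connection dropped after "maximum authentication attempts exceeded", with systemd starting and stopping a per-connection sshd@N.service around every attempt) are a straightforward password-guessing run against the host. A small Python sketch, assuming the journal text is piped on stdin, that tallies the attempts per source IP and username; the script and its names are illustrative, not an existing tool:

import re
import sys
from collections import Counter

# Matches the sshd lines above, e.g.
#   sshd[3683]: Invalid user test from 61.206.202.179 port 36458
INVALID = re.compile(r"sshd\[\d+\]: Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port \d+")
LOCKOUT = re.compile(r"sshd\[\d+\]: maximum authentication attempts exceeded for invalid user \S+ from (\d+\.\d+\.\d+\.\d+)")

def tally(lines):
    attempts, lockouts = Counter(), Counter()
    for line in lines:
        if (m := INVALID.search(line)):
            attempts[(m.group(2), m.group(1))] += 1
        if (m := LOCKOUT.search(line)):
            lockouts[m.group(1)] += 1
    return attempts, lockouts

if __name__ == "__main__":
    attempts, lockouts = tally(sys.stdin)
    for (ip, user), n in attempts.most_common():
        print(f"{ip} tried invalid user {user!r} {n}x (lockouts from this IP: {lockouts[ip]})")

The same message patterns are what a ban tool such as fail2ban, or a rate-limiting firewall rule, would typically key on to drop the source after a few failures.
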
Mar 17 18:40:08.477462 kubelet[2014]: E0317 18:40:08.477369 2014 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
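
The kubelet message repeated throughout this excerpt, the file_linux.go "Unable to read config path" error for /etc/kubernetes/manifests, comes from the kubelet's static-pod file source: the configured static pod directory does not exist, so the kubelet ignores it and logs the check again on the next sync. If no static pods are expected the message is harmless; creating the directory, or removing staticPodPath (the --pod-manifest-path flag) from the kubelet configuration, is what usually silences it. A minimal sketch, assuming the default path is the one intended; the helper below is hypothetical and not part of the kubelet:

import pathlib

# Hypothetical remediation sketch: the kubelet only needs the configured
# staticPodPath to exist; it may stay empty if no static pods are wanted.
MANIFEST_DIR = pathlib.Path("/etc/kubernetes/manifests")

def ensure_manifest_dir():
    MANIFEST_DIR.mkdir(parents=True, exist_ok=True)  # mkdir -p semantics
    print(f"{MANIFEST_DIR} present: {MANIFEST_DIR.is_dir()}")

if __name__ == "__main__":
    ensure_manifest_dir()
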