Sep 4 17:37:15.877861 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:54:07 -00 2024
Sep 4 17:37:15.877882 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:37:15.877893 kernel: BIOS-provided physical RAM map:
Sep 4 17:37:15.877899 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 17:37:15.877905 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 17:37:15.877911 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 17:37:15.877918 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Sep 4 17:37:15.877924 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Sep 4 17:37:15.877930 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 17:37:15.877939 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 17:37:15.877945 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 17:37:15.877951 kernel: NX (Execute Disable) protection: active
Sep 4 17:37:15.877958 kernel: APIC: Static calls initialized
Sep 4 17:37:15.877964 kernel: SMBIOS 2.8 present.
Sep 4 17:37:15.877972 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 4 17:37:15.877981 kernel: Hypervisor detected: KVM
Sep 4 17:37:15.877988 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 17:37:15.877995 kernel: kvm-clock: using sched offset of 2178363128 cycles
Sep 4 17:37:15.878002 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 17:37:15.878009 kernel: tsc: Detected 2794.748 MHz processor
Sep 4 17:37:15.878016 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:37:15.878023 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:37:15.878030 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Sep 4 17:37:15.878037 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 17:37:15.878046 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:37:15.878053 kernel: Using GB pages for direct mapping
Sep 4 17:37:15.878060 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:37:15.878067 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Sep 4 17:37:15.878074 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:37:15.878081 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:37:15.878087 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:37:15.878094 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 4 17:37:15.878101 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:37:15.878110 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:37:15.878117 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:37:15.878124 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Sep 4 17:37:15.878130 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Sep 4 17:37:15.878137 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 4 17:37:15.878144 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Sep 4 17:37:15.878151 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Sep 4 17:37:15.878161 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Sep 4 17:37:15.878170 kernel: No NUMA configuration found
Sep 4 17:37:15.878177 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Sep 4 17:37:15.878184 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Sep 4 17:37:15.878191 kernel: Zone ranges:
Sep 4 17:37:15.878206 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:37:15.878214 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Sep 4 17:37:15.878223 kernel: Normal empty
Sep 4 17:37:15.878230 kernel: Movable zone start for each node
Sep 4 17:37:15.878237 kernel: Early memory node ranges
Sep 4 17:37:15.878244 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 17:37:15.878251 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Sep 4 17:37:15.878258 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Sep 4 17:37:15.878265 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:37:15.878272 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 17:37:15.878280 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Sep 4 17:37:15.878289 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 17:37:15.878296 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 17:37:15.878303 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 17:37:15.878310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 17:37:15.878317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 17:37:15.878324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:37:15.878331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 17:37:15.878339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 17:37:15.878346 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:37:15.878355 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 17:37:15.878362 kernel: TSC deadline timer available
Sep 4 17:37:15.878370 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 4 17:37:15.878377 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 17:37:15.878384 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 17:37:15.878391 kernel: kvm-guest: setup PV sched yield
Sep 4 17:37:15.878398 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Sep 4 17:37:15.878487 kernel: Booting paravirtualized kernel on KVM
Sep 4 17:37:15.878494 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:37:15.878501 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 17:37:15.878511 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Sep 4 17:37:15.878519 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Sep 4 17:37:15.878526 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 17:37:15.878533 kernel: kvm-guest: PV spinlocks enabled
Sep 4 17:37:15.878540 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 17:37:15.878548 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:37:15.878556 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:37:15.878563 kernel: random: crng init done
Sep 4 17:37:15.878572 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:37:15.878579 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:37:15.878586 kernel: Fallback order for Node 0: 0
Sep 4 17:37:15.878593 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Sep 4 17:37:15.878600 kernel: Policy zone: DMA32
Sep 4 17:37:15.878608 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:37:15.878615 kernel: Memory: 2434596K/2571756K available (12288K kernel code, 2304K rwdata, 22708K rodata, 42704K init, 2488K bss, 136900K reserved, 0K cma-reserved)
Sep 4 17:37:15.878622 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 17:37:15.878630 kernel: ftrace: allocating 37748 entries in 148 pages
Sep 4 17:37:15.878639 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:37:15.878646 kernel: Dynamic Preempt: voluntary
Sep 4 17:37:15.878653 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:37:15.878665 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:37:15.878672 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 17:37:15.878679 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:37:15.878687 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:37:15.878694 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:37:15.878701 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:37:15.878710 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 17:37:15.878717 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 17:37:15.878724 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:37:15.878732 kernel: Console: colour VGA+ 80x25
Sep 4 17:37:15.878739 kernel: printk: console [ttyS0] enabled
Sep 4 17:37:15.878746 kernel: ACPI: Core revision 20230628
Sep 4 17:37:15.878753 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 17:37:15.878760 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:37:15.878768 kernel: x2apic enabled
Sep 4 17:37:15.878777 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 17:37:15.878784 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 17:37:15.878791 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 17:37:15.878798 kernel: kvm-guest: setup PV IPIs
Sep 4 17:37:15.878805 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 17:37:15.878813 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 17:37:15.878820 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 4 17:37:15.878827 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 17:37:15.878843 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 17:37:15.878851 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 17:37:15.878858 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:37:15.878866 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:37:15.878878 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:37:15.878888 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:37:15.878897 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 17:37:15.878906 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 17:37:15.878916 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 17:37:15.878928 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 17:37:15.878937 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 17:37:15.878947 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 17:37:15.878957 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 17:37:15.878966 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 17:37:15.878975 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 17:37:15.878984 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 17:37:15.878994 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 17:37:15.879006 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 17:37:15.879015 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:37:15.879024 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:37:15.879034 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 17:37:15.879043 kernel: landlock: Up and running.
Sep 4 17:37:15.879052 kernel: SELinux: Initializing.
Sep 4 17:37:15.879062 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:37:15.879071 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:37:15.879081 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 17:37:15.879093 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:37:15.879102 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:37:15.879110 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:37:15.879117 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 17:37:15.879125 kernel: ... version: 0
Sep 4 17:37:15.879132 kernel: ... bit width: 48
Sep 4 17:37:15.879140 kernel: ... generic registers: 6
Sep 4 17:37:15.879147 kernel: ... value mask: 0000ffffffffffff
Sep 4 17:37:15.879154 kernel: ... max period: 00007fffffffffff
Sep 4 17:37:15.879165 kernel: ... fixed-purpose events: 0
Sep 4 17:37:15.879172 kernel: ... event mask: 000000000000003f
Sep 4 17:37:15.879180 kernel: signal: max sigframe size: 1776
Sep 4 17:37:15.879187 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:37:15.879201 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:37:15.879209 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:37:15.879216 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:37:15.879224 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 17:37:15.879231 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 17:37:15.879241 kernel: smpboot: Max logical packages: 1
Sep 4 17:37:15.879249 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 4 17:37:15.879256 kernel: devtmpfs: initialized
Sep 4 17:37:15.879264 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:37:15.879271 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:37:15.879279 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 17:37:15.879287 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:37:15.879294 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:37:15.879302 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:37:15.879311 kernel: audit: type=2000 audit(1725471435.478:1): state=initialized audit_enabled=0 res=1
Sep 4 17:37:15.879319 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:37:15.879326 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:37:15.879334 kernel: cpuidle: using governor menu
Sep 4 17:37:15.879341 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:37:15.879349 kernel: dca service started, version 1.12.1
Sep 4 17:37:15.879356 kernel: PCI: Using configuration type 1 for base access
Sep 4 17:37:15.879364 kernel: PCI: Using configuration type 1 for extended access
Sep 4 17:37:15.879371 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:37:15.879381 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:37:15.879389 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:37:15.879396 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:37:15.879422 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:37:15.879429 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:37:15.879437 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:37:15.879444 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:37:15.879452 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:37:15.879459 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:37:15.879469 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:37:15.879477 kernel: ACPI: Interpreter enabled
Sep 4 17:37:15.879484 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 17:37:15.879491 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:37:15.879499 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:37:15.879507 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 17:37:15.879514 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 17:37:15.879521 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:37:15.879746 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:37:15.879764 kernel: acpiphp: Slot [3] registered
Sep 4 17:37:15.879772 kernel: acpiphp: Slot [4] registered
Sep 4 17:37:15.879779 kernel: acpiphp: Slot [5] registered
Sep 4 17:37:15.879786 kernel: acpiphp: Slot [6] registered
Sep 4 17:37:15.879795 kernel: acpiphp: Slot [7] registered
Sep 4 17:37:15.879804 kernel: acpiphp: Slot [8] registered
Sep 4 17:37:15.879813 kernel: acpiphp: Slot [9] registered
Sep 4 17:37:15.879823 kernel: acpiphp: Slot [10] registered
Sep 4 17:37:15.879835 kernel: acpiphp: Slot [11] registered
Sep 4 17:37:15.879844 kernel: acpiphp: Slot [12] registered
Sep 4 17:37:15.879853 kernel: acpiphp: Slot [13] registered
Sep 4 17:37:15.879862 kernel: acpiphp: Slot [14] registered
Sep 4 17:37:15.879872 kernel: acpiphp: Slot [15] registered
Sep 4 17:37:15.879881 kernel: acpiphp: Slot [16] registered
Sep 4 17:37:15.879890 kernel: acpiphp: Slot [17] registered
Sep 4 17:37:15.879899 kernel: acpiphp: Slot [18] registered
Sep 4 17:37:15.879908 kernel: acpiphp: Slot [19] registered
Sep 4 17:37:15.879918 kernel: acpiphp: Slot [20] registered
Sep 4 17:37:15.879930 kernel: acpiphp: Slot [21] registered
Sep 4 17:37:15.879939 kernel: acpiphp: Slot [22] registered
Sep 4 17:37:15.879947 kernel: acpiphp: Slot [23] registered
Sep 4 17:37:15.879955 kernel: acpiphp: Slot [24] registered
Sep 4 17:37:15.879962 kernel: acpiphp: Slot [25] registered
Sep 4 17:37:15.879969 kernel: acpiphp: Slot [26] registered
Sep 4 17:37:15.879976 kernel: acpiphp: Slot [27] registered
Sep 4 17:37:15.879984 kernel: acpiphp: Slot [28] registered
Sep 4 17:37:15.879991 kernel: acpiphp: Slot [29] registered
Sep 4 17:37:15.880001 kernel: acpiphp: Slot [30] registered
Sep 4 17:37:15.880008 kernel: acpiphp: Slot [31] registered
Sep 4 17:37:15.880015 kernel: PCI host bridge to bus 0000:00
Sep 4 17:37:15.880142 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 17:37:15.880264 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 17:37:15.880373 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 17:37:15.880510 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Sep 4 17:37:15.880618 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 4 17:37:15.880729 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:37:15.880922 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 17:37:15.881089 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 17:37:15.881278 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 4 17:37:15.881412 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Sep 4 17:37:15.881536 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 4 17:37:15.881658 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 4 17:37:15.881775 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 4 17:37:15.881898 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 4 17:37:15.882057 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 17:37:15.882184 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 4 17:37:15.882314 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 4 17:37:15.882508 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Sep 4 17:37:15.882641 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 4 17:37:15.882760 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 4 17:37:15.882897 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 4 17:37:15.883024 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 17:37:15.883150 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 17:37:15.883279 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Sep 4 17:37:15.883462 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 4 17:37:15.883585 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 4 17:37:15.883712 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 4 17:37:15.883831 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 4 17:37:15.883949 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 4 17:37:15.884067 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 4 17:37:15.884194 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Sep 4 17:37:15.884328 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Sep 4 17:37:15.884469 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 4 17:37:15.884588 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 4 17:37:15.884705 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 4 17:37:15.884715 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 17:37:15.884723 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 17:37:15.884731 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 17:37:15.884738 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 17:37:15.884746 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 17:37:15.884757 kernel: iommu: Default domain type: Translated
Sep 4 17:37:15.884765 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:37:15.884772 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:37:15.884779 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 17:37:15.884787 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 17:37:15.884794 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Sep 4 17:37:15.884911 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 4 17:37:15.885035 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 4 17:37:15.885159 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 17:37:15.885169 kernel: vgaarb: loaded
Sep 4 17:37:15.885177 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 17:37:15.885185 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 17:37:15.885192 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 17:37:15.885208 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:37:15.885216 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:37:15.885223 kernel: pnp: PnP ACPI init
Sep 4 17:37:15.885351 kernel: pnp 00:02: [dma 2]
Sep 4 17:37:15.885366 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 17:37:15.885374 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:37:15.885381 kernel: NET: Registered PF_INET protocol family
Sep 4 17:37:15.885389 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:37:15.885397 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:37:15.885421 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:37:15.885428 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:37:15.885436 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:37:15.885446 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:37:15.885454 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:37:15.885461 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:37:15.885469 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:37:15.885477 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:37:15.885592 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 17:37:15.885702 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 17:37:15.885811 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 17:37:15.885922 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Sep 4 17:37:15.886033 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 4 17:37:15.886154 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 4 17:37:15.886283 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 17:37:15.886294 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:37:15.886301 kernel: Initialise system trusted keyrings
Sep 4 17:37:15.886309 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:37:15.886317 kernel: Key type asymmetric registered
Sep 4 17:37:15.886324 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:37:15.886335 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:37:15.886342 kernel: io scheduler mq-deadline registered
Sep 4 17:37:15.886349 kernel: io scheduler kyber registered
Sep 4 17:37:15.886357 kernel: io scheduler bfq registered
Sep 4 17:37:15.886364 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:37:15.886372 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 17:37:15.886470 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 4 17:37:15.886478 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 17:37:15.886485 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:37:15.886496 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:37:15.886504 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 17:37:15.886511 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 17:37:15.886518 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 17:37:15.886650 kernel: rtc_cmos 00:05: RTC can wake from S4
Sep 4 17:37:15.886661 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 17:37:15.886769 kernel: rtc_cmos 00:05: registered as rtc0
Sep 4 17:37:15.886877 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:37:15 UTC (1725471435)
Sep 4 17:37:15.886992 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 4 17:37:15.887003 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 17:37:15.887010 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:37:15.887019 kernel: Segment Routing with IPv6
Sep 4 17:37:15.887028 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:37:15.887037 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:37:15.887046 kernel: Key type dns_resolver registered
Sep 4 17:37:15.887053 kernel: IPI shorthand broadcast: enabled
Sep 4 17:37:15.887061 kernel: sched_clock: Marking stable (687001980, 108696863)->(811147962, -15449119)
Sep 4 17:37:15.887071 kernel: registered taskstats version 1
Sep 4 17:37:15.887079 kernel: Loading compiled-in X.509 certificates
Sep 4 17:37:15.887086 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 8669771ab5e11f458b79e6634fe685dacc266b18'
Sep 4 17:37:15.887094 kernel: Key type .fscrypt registered
Sep 4 17:37:15.887101 kernel: Key type fscrypt-provisioning registered
Sep 4 17:37:15.887109 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:37:15.887116 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:37:15.887124 kernel: ima: No architecture policies found
Sep 4 17:37:15.887134 kernel: clk: Disabling unused clocks
Sep 4 17:37:15.887141 kernel: Freeing unused kernel image (initmem) memory: 42704K
Sep 4 17:37:15.887149 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:37:15.887156 kernel: Freeing unused kernel image (rodata/data gap) memory: 1868K
Sep 4 17:37:15.887164 kernel: Run /init as init process
Sep 4 17:37:15.887171 kernel: with arguments:
Sep 4 17:37:15.887179 kernel: /init
Sep 4 17:37:15.887186 kernel: with environment:
Sep 4 17:37:15.887219 kernel: HOME=/
Sep 4 17:37:15.887229 kernel: TERM=linux
Sep 4 17:37:15.887239 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:37:15.887249 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:37:15.887259 systemd[1]: Detected virtualization kvm.
Sep 4 17:37:15.887268 systemd[1]: Detected architecture x86-64.
Sep 4 17:37:15.887276 systemd[1]: Running in initrd.
Sep 4 17:37:15.887284 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:37:15.887292 systemd[1]: Hostname set to .
Sep 4 17:37:15.887302 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:37:15.887311 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:37:15.887319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:37:15.887328 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:37:15.887337 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:37:15.887345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:37:15.887353 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:37:15.887364 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:37:15.887374 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:37:15.887383 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:37:15.887391 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:37:15.887410 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:37:15.887419 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:37:15.887427 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:37:15.887438 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:37:15.887446 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:37:15.887455 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:37:15.887463 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:37:15.887471 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:37:15.887480 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:37:15.887488 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:37:15.887496 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:37:15.887505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:37:15.887515 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:37:15.887523 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:37:15.887531 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:37:15.887540 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:37:15.887548 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:37:15.887558 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:37:15.887567 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:37:15.887575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:37:15.887583 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:37:15.887592 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:37:15.887600 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:37:15.887628 systemd-journald[190]: Collecting audit messages is disabled.
Sep 4 17:37:15.887649 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:37:15.887658 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:37:15.887669 systemd-journald[190]: Journal started
Sep 4 17:37:15.887687 systemd-journald[190]: Runtime Journal (/run/log/journal/289dafb6c5ab4ee79925a2bc8bc838ab) is 6.0M, max 48.4M, 42.3M free.
Sep 4 17:37:15.881288 systemd-modules-load[193]: Inserted module 'overlay'
Sep 4 17:37:15.921087 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:37:15.921110 kernel: Bridge firewalling registered
Sep 4 17:37:15.907737 systemd-modules-load[193]: Inserted module 'br_netfilter'
Sep 4 17:37:15.922851 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:37:15.924216 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:37:15.926576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:37:15.946564 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:37:15.949701 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:37:15.952292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:37:15.955350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:37:15.964930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:37:15.967892 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:37:15.969342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:37:15.971482 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:37:15.981519 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:37:15.984952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:37:15.993863 dracut-cmdline[228]: dracut-dracut-053
Sep 4 17:37:15.996878 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ceda2dd706627da8006bcd6ae77ea155b2a7de6732e2c1c7ab4bed271400663d
Sep 4 17:37:16.023894 systemd-resolved[233]: Positive Trust Anchors:
Sep 4 17:37:16.023907 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:37:16.023937 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:37:16.029188 systemd-resolved[233]: Defaulting to hostname 'linux'.
Sep 4 17:37:16.030262 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:37:16.035423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:37:16.079427 kernel: SCSI subsystem initialized
Sep 4 17:37:16.088426 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:37:16.099432 kernel: iscsi: registered transport (tcp)
Sep 4 17:37:16.119664 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:37:16.119690 kernel: QLogic iSCSI HBA Driver
Sep 4 17:37:16.171799 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:37:16.181527 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:37:16.208066 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:37:16.208113 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:37:16.208128 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:37:16.249428 kernel: raid6: avx2x4 gen() 30172 MB/s
Sep 4 17:37:16.266426 kernel: raid6: avx2x2 gen() 30770 MB/s
Sep 4 17:37:16.283563 kernel: raid6: avx2x1 gen() 25519 MB/s
Sep 4 17:37:16.283583 kernel: raid6: using algorithm avx2x2 gen() 30770 MB/s
Sep 4 17:37:16.301565 kernel: raid6: .... xor() 19773 MB/s, rmw enabled
Sep 4 17:37:16.301595 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 17:37:16.321429 kernel: xor: automatically using best checksumming function avx
Sep 4 17:37:16.472441 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:37:16.485387 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:37:16.492625 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:37:16.504883 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Sep 4 17:37:16.509693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:37:16.523550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:37:16.536445 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Sep 4 17:37:16.565359 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:37:16.578530 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:37:16.642510 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:37:16.652570 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:37:16.667504 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:37:16.670707 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:37:16.671965 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:37:16.673157 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:37:16.683468 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 17:37:16.691806 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:37:16.691834 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:37:16.696483 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:37:16.696684 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:37:16.683631 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:37:16.698935 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:37:16.709419 kernel: libata version 3.00 loaded.
Sep 4 17:37:16.712270 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:37:16.712292 kernel: GPT:9289727 != 19775487
Sep 4 17:37:16.712302 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:37:16.712312 kernel: GPT:9289727 != 19775487
Sep 4 17:37:16.712714 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:37:16.712587 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:37:16.716708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:37:16.716724 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 4 17:37:16.712702 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:37:16.722089 kernel: scsi host0: ata_piix
Sep 4 17:37:16.722283 kernel: scsi host1: ata_piix
Sep 4 17:37:16.722444 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Sep 4 17:37:16.722456 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Sep 4 17:37:16.721107 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:37:16.724271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:37:16.725480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:37:16.728283 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:37:16.739421 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (467)
Sep 4 17:37:16.739470 kernel: BTRFS: device fsid 0dc40443-7f77-4fa7-b5e4-579d4bba0772 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (469)
Sep 4 17:37:16.741644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:37:16.766378 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:37:16.790759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:37:16.793322 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:37:16.798940 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:37:16.799017 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:37:16.803946 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:37:16.818529 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:37:16.820334 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:37:16.828782 disk-uuid[547]: Primary Header is updated.
Sep 4 17:37:16.828782 disk-uuid[547]: Secondary Entries is updated.
Sep 4 17:37:16.828782 disk-uuid[547]: Secondary Header is updated.
Sep 4 17:37:16.835066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:37:16.837426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:37:16.841574 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:37:16.879806 kernel: ata2: found unknown device (class 0)
Sep 4 17:37:16.879858 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 17:37:16.882466 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 17:37:16.939457 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 17:37:16.939685 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 17:37:16.953464 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 4 17:37:17.840458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:37:17.841214 disk-uuid[548]: The operation has completed successfully.
Sep 4 17:37:17.878180 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:37:17.878345 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:37:17.912636 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:37:17.917231 sh[583]: Success
Sep 4 17:37:17.930477 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 4 17:37:17.970091 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:37:17.985113 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:37:17.987575 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:37:18.001285 kernel: BTRFS info (device dm-0): first mount of filesystem 0dc40443-7f77-4fa7-b5e4-579d4bba0772
Sep 4 17:37:18.001342 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:37:18.001357 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:37:18.002311 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:37:18.003046 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:37:18.008823 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:37:18.009641 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:37:18.022702 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:37:18.025564 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:37:18.034773 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:37:18.034799 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:37:18.034810 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:37:18.038633 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:37:18.048584 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:37:18.050505 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:37:18.060957 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:37:18.066663 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:37:18.130714 ignition[675]: Ignition 2.19.0
Sep 4 17:37:18.130728 ignition[675]: Stage: fetch-offline
Sep 4 17:37:18.130764 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:37:18.130776 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:37:18.130897 ignition[675]: parsed url from cmdline: ""
Sep 4 17:37:18.130903 ignition[675]: no config URL provided
Sep 4 17:37:18.130910 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:37:18.130923 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:37:18.130955 ignition[675]: op(1): [started] loading QEMU firmware config module
Sep 4 17:37:18.130965 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:37:18.143716 ignition[675]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:37:18.157702 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:37:18.176594 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:37:18.188897 ignition[675]: parsing config with SHA512: 7af68acb839ae0d4384cbaaf4ebcbe8429d586cdf84f62e3e113dc3462d787b1b8175c24e847d47a9eb9a12308e2c90a6479df5925232cb1ed50c393473436a3
Sep 4 17:37:18.192419 unknown[675]: fetched base config from "system"
Sep 4 17:37:18.192771 unknown[675]: fetched user config from "qemu"
Sep 4 17:37:18.194053 ignition[675]: fetch-offline: fetch-offline passed
Sep 4 17:37:18.194167 ignition[675]: Ignition finished successfully
Sep 4 17:37:18.198764 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:37:18.207539 systemd-networkd[774]: lo: Link UP
Sep 4 17:37:18.207553 systemd-networkd[774]: lo: Gained carrier
Sep 4 17:37:18.209477 systemd-networkd[774]: Enumeration completed
Sep 4 17:37:18.209627 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:37:18.209952 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:37:18.209957 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:37:18.210979 systemd-networkd[774]: eth0: Link UP
Sep 4 17:37:18.210984 systemd-networkd[774]: eth0: Gained carrier
Sep 4 17:37:18.210991 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:37:18.211453 systemd[1]: Reached target network.target - Network.
Sep 4 17:37:18.211805 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:37:18.222649 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:37:18.226473 systemd-networkd[774]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:37:18.235283 ignition[777]: Ignition 2.19.0
Sep 4 17:37:18.235295 ignition[777]: Stage: kargs
Sep 4 17:37:18.235466 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:37:18.235477 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:37:18.236247 ignition[777]: kargs: kargs passed
Sep 4 17:37:18.236299 ignition[777]: Ignition finished successfully
Sep 4 17:37:18.240755 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:37:18.254666 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:37:18.267592 ignition[786]: Ignition 2.19.0
Sep 4 17:37:18.267603 ignition[786]: Stage: disks
Sep 4 17:37:18.267812 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:37:18.267826 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:37:18.270773 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:37:18.268947 ignition[786]: disks: disks passed
Sep 4 17:37:18.273385 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:37:18.269004 ignition[786]: Ignition finished successfully
Sep 4 17:37:18.274730 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:37:18.276608 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:37:18.278682 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:37:18.280481 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:37:18.291627 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:37:18.304765 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:37:18.311041 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:37:18.319590 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:37:18.403435 kernel: EXT4-fs (vda9): mounted filesystem bdbe0f61-2675-40b7-b9ae-5653402e9b23 r/w with ordered data mode. Quota mode: none.
Sep 4 17:37:18.404258 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:37:18.405082 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:37:18.411546 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:37:18.413708 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:37:18.414936 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:37:18.414989 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:37:18.423105 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805)
Sep 4 17:37:18.423128 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:37:18.415017 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:37:18.429321 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:37:18.429354 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:37:18.429365 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:37:18.422422 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:37:18.431221 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:37:18.443661 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:37:18.478060 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:37:18.483070 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:37:18.488368 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:37:18.492788 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:37:18.572368 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:37:18.582532 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:37:18.585784 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:37:18.590432 kernel: BTRFS info (device vda6): last unmount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:37:18.609095 ignition[918]: INFO : Ignition 2.19.0
Sep 4 17:37:18.610166 ignition[918]: INFO : Stage: mount
Sep 4 17:37:18.610166 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:37:18.610166 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:37:18.609448 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:37:18.616086 ignition[918]: INFO : mount: mount passed
Sep 4 17:37:18.616086 ignition[918]: INFO : Ignition finished successfully
Sep 4 17:37:18.612361 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:37:18.621526 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:37:19.000013 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:37:19.021534 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:37:19.029192 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (932)
Sep 4 17:37:19.029225 kernel: BTRFS info (device vda6): first mount of filesystem b2463ce1-c756-4e78-b7f2-401dad24571d
Sep 4 17:37:19.029237 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:37:19.030682 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:37:19.033418 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:37:19.034333 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:37:19.052529 ignition[949]: INFO : Ignition 2.19.0 Sep 4 17:37:19.052529 ignition[949]: INFO : Stage: files Sep 4 17:37:19.054308 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:37:19.054308 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:37:19.054308 ignition[949]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:37:19.054308 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:37:19.054308 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:37:19.060950 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:37:19.060950 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:37:19.060950 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:37:19.060950 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:37:19.060950 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 4 17:37:19.056891 unknown[949]: wrote ssh authorized keys file for user: core Sep 4 17:37:19.115617 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:37:19.201054 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:37:19.203298 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Sep 4 17:37:19.572485 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:37:19.936373 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Sep 4 17:37:19.936373 ignition[949]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:37:19.940205 ignition[949]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:37:19.940205 ignition[949]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:37:19.940205 ignition[949]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:37:19.940205 ignition[949]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 4 17:37:19.940205 ignition[949]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:37:19.940205 ignition[949]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 17:37:19.940205 ignition[949]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 4 17:37:19.940205 ignition[949]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 17:37:19.959758 ignition[949]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:37:19.963852 ignition[949]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 17:37:19.965461 ignition[949]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 17:37:19.965461 ignition[949]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:37:19.965461 ignition[949]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:37:19.965461 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:37:19.965461 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:37:19.965461 ignition[949]: INFO : files: files passed Sep 4 17:37:19.965461 ignition[949]: INFO : Ignition finished successfully Sep 4 17:37:19.966729 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:37:19.975787 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:37:19.977788 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:37:19.979606 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 4 17:37:19.979739 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:37:19.988733 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 17:37:19.991892 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:37:19.991892 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:37:19.995159 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:37:19.994841 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:37:19.996638 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:37:20.009730 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:37:20.035910 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:37:20.036063 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:37:20.037278 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:37:20.039452 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:37:20.039976 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:37:20.040883 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:37:20.060556 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:37:20.065572 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:37:20.076504 systemd[1]: Stopped target network.target - Network. Sep 4 17:37:20.078367 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:37:20.079629 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:37:20.081900 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:37:20.083913 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:37:20.084042 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:37:20.086391 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:37:20.087964 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:37:20.090014 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:37:20.092145 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:37:20.094138 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:37:20.096306 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:37:20.098458 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:37:20.100803 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:37:20.102801 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:37:20.105020 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:37:20.106817 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:37:20.106947 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:37:20.109283 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 4 17:37:20.110745 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:37:20.112842 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:37:20.112979 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:37:20.115107 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:37:20.115215 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:37:20.117621 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:37:20.117737 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:37:20.119578 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:37:20.121344 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:37:20.121516 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:37:20.124022 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:37:20.125790 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:37:20.127783 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:37:20.127889 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:37:20.129775 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:37:20.129865 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:37:20.131911 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:37:20.132029 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:37:20.133979 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:37:20.134083 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:37:20.144547 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:37:20.146824 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:37:20.148398 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:37:20.150452 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:37:20.151186 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:37:20.151388 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:37:20.152350 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:37:20.162536 ignition[1003]: INFO : Ignition 2.19.0 Sep 4 17:37:20.162536 ignition[1003]: INFO : Stage: umount Sep 4 17:37:20.162536 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:37:20.162536 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 17:37:20.152534 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:37:20.171453 ignition[1003]: INFO : umount: umount passed Sep 4 17:37:20.171453 ignition[1003]: INFO : Ignition finished successfully Sep 4 17:37:20.157452 systemd-networkd[774]: eth0: DHCPv6 lease lost Sep 4 17:37:20.160310 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:37:20.160456 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:37:20.162908 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:37:20.163010 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
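
The long run of "Stopped target ..." / "Deactivated successfully" pairs is the normal reverse-dependency teardown the initramfs performs before pivoting to the real root: units stop roughly in the reverse of their startup order. The ordering edges behind such a sequence can be inspected on a running system (illustrative commands):

    $ systemctl list-dependencies --before initrd-switch-root.target
    $ systemctl list-dependencies --after initrd-cleanup.service
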
Sep 4 17:37:20.164970 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:37:20.165078 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:37:20.167912 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:37:20.168049 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:37:20.172482 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:37:20.172533 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:37:20.174332 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:37:20.174383 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:37:20.175638 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:37:20.175688 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:37:20.177392 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:37:20.177454 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:37:20.179518 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:37:20.179569 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:37:20.188505 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:37:20.190039 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:37:20.190115 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:37:20.192540 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:37:20.192592 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:37:20.194705 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:37:20.194752 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:37:20.195896 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:37:20.195941 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:37:20.198171 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:37:20.203660 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:37:20.225996 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:37:20.226140 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:37:20.228295 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:37:20.228477 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:37:20.230918 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:37:20.230985 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:37:20.232260 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:37:20.232303 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:37:20.234561 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:37:20.234611 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:37:20.236707 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:37:20.236752 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:37:20.238791 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 4 17:37:20.238837 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:37:20.248606 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:37:20.249770 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:37:20.249837 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:37:20.258480 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 4 17:37:20.258536 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:37:20.260760 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:37:20.260807 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:37:20.263293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:37:20.263340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:37:20.265994 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:37:20.266120 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:37:20.323767 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:37:20.323925 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:37:20.326424 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:37:20.328332 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:37:20.328394 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:37:20.338560 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:37:20.347293 systemd[1]: Switching root. Sep 4 17:37:20.379997 systemd-journald[190]: Journal stopped Sep 4 17:37:21.421215 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). Sep 4 17:37:21.421297 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:37:21.421322 kernel: SELinux: policy capability open_perms=1 Sep 4 17:37:21.421338 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:37:21.421362 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:37:21.421377 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:37:21.421461 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:37:21.421495 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:37:21.421511 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:37:21.421526 kernel: audit: type=1403 audit(1725471440.700:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:37:21.421542 systemd[1]: Successfully loaded SELinux policy in 41.119ms. Sep 4 17:37:21.421567 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.902ms. Sep 4 17:37:21.421585 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:37:21.421601 systemd[1]: Detected virtualization kvm. Sep 4 17:37:21.421624 systemd[1]: Detected architecture x86-64. Sep 4 17:37:21.421643 systemd[1]: Detected first boot. Sep 4 17:37:21.421659 systemd[1]: Initializing machine ID from VM UUID. 
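
"Initializing machine ID from VM UUID" means systemd seeded the machine ID from the DMI product UUID that QEMU exposes, since this first boot has no persisted /etc/machine-id yet; the resulting ID (the UUID's hex digits without dashes) is the 289dafb6... directory name in the journal paths further down. An illustrative check on a KVM guest (the UUID shown is reconstructed from that directory name, not read from this VM):

    $ cat /sys/class/dmi/id/product_uuid
    289dafb6-c5ab-4ee7-9925-a2bc8bc838ab
    $ cat /etc/machine-id
    289dafb6c5ab4ee79925a2bc8bc838ab
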
Sep 4 17:37:21.421675 zram_generator::config[1047]: No configuration found. Sep 4 17:37:21.421693 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:37:21.421709 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:37:21.421725 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:37:21.421741 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:37:21.421760 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:37:21.421780 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:37:21.421796 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:37:21.421818 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:37:21.421834 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:37:21.421851 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:37:21.421867 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:37:21.421884 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:37:21.421900 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:37:21.421920 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:37:21.421936 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:37:21.421952 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:37:21.422134 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:37:21.422159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:37:21.422176 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 17:37:21.422192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:37:21.422208 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:37:21.422224 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:37:21.422245 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:37:21.422261 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:37:21.422277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:37:21.422294 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:37:21.422313 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:37:21.422330 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:37:21.422346 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:37:21.422362 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:37:21.422382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:37:21.422418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:37:21.422436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:37:21.422452 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
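
"Populated /etc with preset unit settings" applies systemd preset policy to decide which units start out enabled. Preset files are one directive per line, and the enable/disable decisions Ignition logged during the files stage land in this same mechanism through a preset fragment, roughly like the following (the path and exact fragment are an assumption):

    # /etc/systemd/system-preset/20-ignition.preset (assumed path)
    enable prepare-helm.service
    disable coreos-metadata.service
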
Sep 4 17:37:21.422468 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:37:21.422484 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:37:21.422500 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:37:21.422519 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:37:21.422535 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:37:21.422555 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:37:21.422571 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:37:21.422588 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:37:21.422603 systemd[1]: Reached target machines.target - Containers. Sep 4 17:37:21.422619 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:37:21.422635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:37:21.422651 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:37:21.422667 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:37:21.422686 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:37:21.422702 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:37:21.422717 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:37:21.422733 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:37:21.422749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:37:21.422765 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:37:21.422781 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:37:21.422796 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:37:21.422812 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:37:21.422831 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:37:21.422848 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:37:21.422865 kernel: loop: module loaded Sep 4 17:37:21.422880 kernel: fuse: init (API version 7.39) Sep 4 17:37:21.422896 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:37:21.422912 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:37:21.422928 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:37:21.422944 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:37:21.422960 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:37:21.422979 systemd[1]: Stopped verity-setup.service. Sep 4 17:37:21.422996 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:37:21.423034 systemd-journald[1111]: Collecting audit messages is disabled. 
Sep 4 17:37:21.423070 kernel: ACPI: bus type drm_connector registered Sep 4 17:37:21.423086 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:37:21.423102 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:37:21.423118 systemd-journald[1111]: Journal started Sep 4 17:37:21.423149 systemd-journald[1111]: Runtime Journal (/run/log/journal/289dafb6c5ab4ee79925a2bc8bc838ab) is 6.0M, max 48.4M, 42.3M free. Sep 4 17:37:21.199019 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:37:21.212349 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:37:21.212773 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:37:21.427115 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:37:21.427998 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:37:21.429491 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:37:21.431021 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:37:21.432610 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:37:21.434210 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:37:21.436171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:37:21.438177 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:37:21.438416 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:37:21.440433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:37:21.440653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:37:21.442570 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:37:21.442792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:37:21.444693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:37:21.444907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:37:21.446877 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:37:21.447103 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:37:21.448944 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:37:21.449169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:37:21.451127 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:37:21.452931 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:37:21.455116 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:37:21.475113 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:37:21.485623 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:37:21.488791 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:37:21.490300 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:37:21.490344 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:37:21.492961 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
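
The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are all instances of one template unit that loads whatever module the instance name specifies, which is why each "Finished modprobe@X" pairs with a kernel module-load line. An abridged sketch of such a template (paraphrased, not copied from this system):

    # modprobe@.service (abridged sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i
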
Sep 4 17:37:21.495964 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:37:21.501325 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:37:21.502718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:37:21.506162 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:37:21.509600 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:37:21.511154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:37:21.513659 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:37:21.515183 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:37:21.522765 systemd-journald[1111]: Time spent on flushing to /var/log/journal/289dafb6c5ab4ee79925a2bc8bc838ab is 19.206ms for 944 entries. Sep 4 17:37:21.522765 systemd-journald[1111]: System Journal (/var/log/journal/289dafb6c5ab4ee79925a2bc8bc838ab) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:37:21.563373 systemd-journald[1111]: Received client request to flush runtime journal. Sep 4 17:37:21.563443 kernel: loop0: detected capacity change from 0 to 211296 Sep 4 17:37:21.519054 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:37:21.523996 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:37:21.531609 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:37:21.535577 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:37:21.537990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:37:21.540051 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:37:21.542216 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:37:21.550853 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:37:21.562381 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:37:21.570431 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:37:21.575247 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:37:21.579188 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:37:21.581176 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:37:21.584139 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:37:21.594501 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 17:37:21.627356 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Sep 4 17:37:21.627379 systemd-tmpfiles[1161]: ACLs are not supported, ignoring. Sep 4 17:37:21.635287 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:37:21.647548 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
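
The journald lines above show the volatile runtime journal (6.0M of a 48.4M cap in /run) being flushed to the persistent system journal in /var/log/journal (8.0M of a 195.6M cap). Both caps are tunable, and usage can be queried directly (illustrative):

    # /etc/systemd/journald.conf (illustrative overrides)
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=195M

    $ journalctl --disk-usage
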
Sep 4 17:37:21.651432 kernel: loop1: detected capacity change from 0 to 89336 Sep 4 17:37:21.678474 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:37:21.680621 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:37:21.686435 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:37:21.693827 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:37:21.696449 kernel: loop2: detected capacity change from 0 to 140728 Sep 4 17:37:21.709828 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Sep 4 17:37:21.709849 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Sep 4 17:37:21.716119 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:37:21.743448 kernel: loop3: detected capacity change from 0 to 211296 Sep 4 17:37:21.751681 kernel: loop4: detected capacity change from 0 to 89336 Sep 4 17:37:21.761421 kernel: loop5: detected capacity change from 0 to 140728 Sep 4 17:37:21.770508 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:37:21.771157 (sd-merge)[1187]: Merged extensions into '/usr'. Sep 4 17:37:21.775000 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:37:21.775015 systemd[1]: Reloading... Sep 4 17:37:21.848425 zram_generator::config[1214]: No configuration found. Sep 4 17:37:21.916676 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:37:21.982979 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:37:22.033725 systemd[1]: Reloading finished in 258 ms. Sep 4 17:37:22.070669 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:37:22.072234 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:37:22.084746 systemd[1]: Starting ensure-sysext.service... Sep 4 17:37:22.087788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:37:22.092767 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:37:22.092780 systemd[1]: Reloading... Sep 4 17:37:22.110459 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:37:22.111171 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:37:22.112313 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:37:22.112691 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Sep 4 17:37:22.112831 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Sep 4 17:37:22.116018 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:37:22.116035 systemd-tmpfiles[1249]: Skipping /boot Sep 4 17:37:22.131337 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:37:22.131353 systemd-tmpfiles[1249]: Skipping /boot Sep 4 17:37:22.142426 zram_generator::config[1273]: No configuration found. 
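
The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr; the kubernetes image is the .raw file Ignition linked into /etc/extensions earlier, and the merge is what triggers the systemd reload that follows. Equivalent manual inspection (illustrative commands):

    $ systemd-sysext status     # show merged extension images per hierarchy
    $ systemd-sysext refresh    # unmerge and re-merge after changing /etc/extensions
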
Sep 4 17:37:22.256259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:37:22.304701 systemd[1]: Reloading finished in 211 ms. Sep 4 17:37:22.321769 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:37:22.333387 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:37:22.343284 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:37:22.346151 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:37:22.348862 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:37:22.354442 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:37:22.358679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:37:22.364155 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:37:22.367523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:37:22.367696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:37:22.369657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:37:22.371944 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:37:22.376569 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:37:22.377993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:37:22.382605 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:37:22.383712 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:37:22.384779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:37:22.384955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:37:22.386885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:37:22.387178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:37:22.391282 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:37:22.391462 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:37:22.393043 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:37:22.400607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:37:22.400807 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:37:22.401145 systemd-udevd[1319]: Using default interface naming scheme 'v255'. Sep 4 17:37:22.406798 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:37:22.410280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
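
The "Duplicate line for path" warnings during the reload above mean two tmpfiles.d fragments claim the same path; systemd-tmpfiles keeps the first match and ignores the rest, so they are harmless. tmpfiles.d entries use a fixed column layout (illustrative fragment):

    # /etc/tmpfiles.d/example.conf (illustrative)
    # Type  Path              Mode  UID   GID              Age  Argument
    d       /var/log/journal  2755  root  systemd-journal  -    -
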
Sep 4 17:37:22.411547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:37:22.413192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:37:22.416785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:37:22.421226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:37:22.422708 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:37:22.422799 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:37:22.423676 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:37:22.426516 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:37:22.428604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:37:22.428876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:37:22.431061 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:37:22.431331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:37:22.432280 augenrules[1345]: No rules Sep 4 17:37:22.433745 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:37:22.435987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:37:22.436200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:37:22.447768 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:37:22.450766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:37:22.467827 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:37:22.481741 systemd[1]: Finished ensure-sysext.service. Sep 4 17:37:22.486157 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:37:22.486303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:37:22.494627 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:37:22.497693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:37:22.501783 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:37:22.507306 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1368) Sep 4 17:37:22.507507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:37:22.508927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:37:22.512147 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:37:22.521594 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:37:22.522787 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 4 17:37:22.522829 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:37:22.523425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:37:22.523628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:37:22.525124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:37:22.525330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:37:22.527154 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:37:22.527358 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:37:22.529825 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:37:22.534441 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1373) Sep 4 17:37:22.538436 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1373) Sep 4 17:37:22.541024 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:37:22.542472 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:37:22.559295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:37:22.559353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:37:22.561419 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 17:37:22.563432 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 4 17:37:22.567424 kernel: ACPI: button: Power Button [PWRF] Sep 4 17:37:22.570205 systemd-resolved[1316]: Positive Trust Anchors: Sep 4 17:37:22.570219 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:37:22.570250 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:37:22.574688 systemd-resolved[1316]: Defaulting to hostname 'linux'. Sep 4 17:37:22.576648 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:37:22.578143 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:37:22.584050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:37:22.591584 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:37:22.604421 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 17:37:22.646914 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
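
The "Positive Trust Anchors" dump above is systemd-resolved loading its built-in DNSSEC root key (the ". IN DS 20326 8 2 ..." record is the IANA root KSK-2017 trust anchor) together with the private and reverse zones it will never attempt to validate; with no hostname supplied by DHCP or Ignition it falls back to 'linux'. DNSSEC enforcement is configurable (illustrative):

    # /etc/systemd/resolved.conf (illustrative)
    [Resolve]
    DNSSEC=allow-downgrade

    $ resolvectl status
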
Sep 4 17:37:22.663201 systemd-networkd[1386]: lo: Link UP Sep 4 17:37:22.691178 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:37:22.663215 systemd-networkd[1386]: lo: Gained carrier Sep 4 17:37:22.664931 systemd-networkd[1386]: Enumeration completed Sep 4 17:37:22.685486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:37:22.686892 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:37:22.688680 systemd[1]: Reached target network.target - Network. Sep 4 17:37:22.691158 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:37:22.695938 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:37:22.697507 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:37:22.698354 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:37:22.698362 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:37:22.699814 kernel: kvm_amd: TSC scaling supported Sep 4 17:37:22.699965 kernel: kvm_amd: Nested Virtualization enabled Sep 4 17:37:22.699979 kernel: kvm_amd: Nested Paging enabled Sep 4 17:37:22.699997 kernel: kvm_amd: LBR virtualization supported Sep 4 17:37:22.700238 systemd-networkd[1386]: eth0: Link UP Sep 4 17:37:22.701484 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 17:37:22.701508 kernel: kvm_amd: Virtual GIF supported Sep 4 17:37:22.701132 systemd-networkd[1386]: eth0: Gained carrier Sep 4 17:37:22.701147 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:37:22.722473 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:37:22.723961 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Sep 4 17:37:22.724423 kernel: EDAC MC: Ver: 3.0.0 Sep 4 17:37:23.387900 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:37:23.387957 systemd-timesyncd[1388]: Initial clock synchronization to Wed 2024-09-04 17:37:23.387810 UTC. Sep 4 17:37:23.388008 systemd-resolved[1316]: Clock change detected. Flushing caches. Sep 4 17:37:23.425256 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:37:23.459554 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:37:23.461304 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:37:23.468990 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:37:23.503304 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:37:23.504882 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:37:23.506047 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:37:23.507261 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:37:23.508570 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:37:23.510061 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
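
eth0 matched the shipped catch-all zz-default.network and acquired 10.0.0.117/16 over DHCP; the jump in log timestamps from 17:37:22 to 17:37:23.387 is timesyncd's first synchronization against 10.0.0.1, which is also why resolved reports "Clock change detected. Flushing caches." A sketch of what such a catch-all network unit contains (the file's exact contents are not in the log):

    # /usr/lib/systemd/network/zz-default.network (sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes
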
Sep 4 17:37:23.511301 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:37:23.512754 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:37:23.514037 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:37:23.514063 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:37:23.515005 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:37:23.516779 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:37:23.519573 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:37:23.530983 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:37:23.533357 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:37:23.534911 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:37:23.536151 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:37:23.537153 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:37:23.538159 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:37:23.538188 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:37:23.539164 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:37:23.541266 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:37:23.544452 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:37:23.545446 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:37:23.549066 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:37:23.550199 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:37:23.553029 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:37:23.553774 jq[1424]: false Sep 4 17:37:23.555995 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:37:23.558136 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:37:23.563497 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:37:23.570655 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:37:23.572196 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:37:23.572617 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:37:23.575551 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:37:23.582485 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:37:23.584769 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:37:23.587240 dbus-daemon[1423]: [system] SELinux support is enabled Sep 4 17:37:23.592565 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
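
The docker.socket warnings during the earlier reloads ("ListenStream= references a path below legacy directory /var/run/") were auto-corrected in memory; making the fix permanent is a small drop-in that resets the listen list and points it at /run (illustrative):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf (illustrative)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
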
Sep 4 17:37:23.593152 extend-filesystems[1425]: Found loop3 Sep 4 17:37:23.595772 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:37:23.597464 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:37:23.598679 extend-filesystems[1425]: Found loop4 Sep 4 17:37:23.598679 extend-filesystems[1425]: Found loop5 Sep 4 17:37:23.598679 extend-filesystems[1425]: Found sr0 Sep 4 17:37:23.598679 extend-filesystems[1425]: Found vda Sep 4 17:37:23.598679 extend-filesystems[1425]: Found vda1 Sep 4 17:37:23.598679 extend-filesystems[1425]: Found vda2 Sep 4 17:37:23.597845 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:37:23.608455 extend-filesystems[1425]: Found vda3 Sep 4 17:37:23.608455 extend-filesystems[1425]: Found usr Sep 4 17:37:23.608455 extend-filesystems[1425]: Found vda4 Sep 4 17:37:23.608455 extend-filesystems[1425]: Found vda6 Sep 4 17:37:23.608455 extend-filesystems[1425]: Found vda7 Sep 4 17:37:23.608455 extend-filesystems[1425]: Found vda9 Sep 4 17:37:23.608455 extend-filesystems[1425]: Checking size of /dev/vda9 Sep 4 17:37:23.598057 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:37:23.618053 update_engine[1435]: I0904 17:37:23.614423 1435 main.cc:92] Flatcar Update Engine starting Sep 4 17:37:23.600142 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:37:23.600349 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:37:23.621387 update_engine[1435]: I0904 17:37:23.620494 1435 update_check_scheduler.cc:74] Next update check in 6m39s Sep 4 17:37:23.621539 jq[1440]: true Sep 4 17:37:23.621264 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:37:23.621755 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:37:23.621787 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:37:23.623764 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:37:23.623792 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:37:23.627617 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:37:23.632235 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:37:23.636597 tar[1443]: linux-amd64/helm Sep 4 17:37:23.640231 extend-filesystems[1425]: Resized partition /dev/vda9 Sep 4 17:37:23.646360 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1357) Sep 4 17:37:23.650371 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:37:23.652516 jq[1454]: true Sep 4 17:37:23.653001 systemd-logind[1432]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 17:37:23.653038 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:37:23.660207 systemd-logind[1432]: New seat seat0. 
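
Two things come together in this stretch: update_engine schedules its first update check (6m39s out) and locksmithd will take its reboot strategy from the /etc/flatcar/update.conf that Ignition wrote, while extend-filesystems grows the root ext4 online, as the resize2fs output that follows shows (553472 to 1864699 4k blocks, roughly 2.1G to 7.1G). Illustrative equivalents:

    # /etc/flatcar/update.conf (illustrative; matches the strategy="reboot" locksmithd logs below)
    REBOOT_STRATEGY=reboot

    $ resize2fs /dev/vda9    # ext4 grows online while mounted on /
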
Sep 4 17:37:23.662242 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:37:23.669227 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:37:23.691591 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:37:23.716237 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:37:23.716237 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:37:23.716237 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:37:23.724673 extend-filesystems[1425]: Resized filesystem in /dev/vda9 Sep 4 17:37:23.720177 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:37:23.720462 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:37:23.740782 bash[1477]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:37:23.743862 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:37:23.746061 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:37:23.749825 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:37:23.813958 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:37:23.837042 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:37:23.840262 containerd[1445]: time="2024-09-04T17:37:23.840193655Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:37:23.847982 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:37:23.856658 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:37:23.857091 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:37:23.862262 containerd[1445]: time="2024-09-04T17:37:23.862222988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:37:23.863942 containerd[1445]: time="2024-09-04T17:37:23.863846593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:37:23.863942 containerd[1445]: time="2024-09-04T17:37:23.863873042Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:37:23.863942 containerd[1445]: time="2024-09-04T17:37:23.863887950Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:37:23.864086 containerd[1445]: time="2024-09-04T17:37:23.864069300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:37:23.864150 containerd[1445]: time="2024-09-04T17:37:23.864135334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:37:23.864233 containerd[1445]: time="2024-09-04T17:37:23.864215394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:37:23.864254 containerd[1445]: time="2024-09-04T17:37:23.864231014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:37:23.864490 containerd[1445]: time="2024-09-04T17:37:23.864471505Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:37:23.864616 containerd[1445]: time="2024-09-04T17:37:23.864602500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:37:23.864658 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:37:23.865102 containerd[1445]: time="2024-09-04T17:37:23.864653426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:37:23.865102 containerd[1445]: time="2024-09-04T17:37:23.864665128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:37:23.865102 containerd[1445]: time="2024-09-04T17:37:23.864759084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:37:23.865102 containerd[1445]: time="2024-09-04T17:37:23.864989236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:37:23.865405 containerd[1445]: time="2024-09-04T17:37:23.865387222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:37:23.865457 containerd[1445]: time="2024-09-04T17:37:23.865445291Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:37:23.865592 containerd[1445]: time="2024-09-04T17:37:23.865577369Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:37:23.865693 containerd[1445]: time="2024-09-04T17:37:23.865679961Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:37:23.875306 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:37:23.883590 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:37:23.885750 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:37:23.887027 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:37:24.007873 containerd[1445]: time="2024-09-04T17:37:24.007782995Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:37:24.007873 containerd[1445]: time="2024-09-04T17:37:24.007873314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:37:24.007873 containerd[1445]: time="2024-09-04T17:37:24.007891488Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:37:24.008034 containerd[1445]: time="2024-09-04T17:37:24.007909182Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:37:24.008034 containerd[1445]: time="2024-09-04T17:37:24.007924500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 4 17:37:24.008189 containerd[1445]: time="2024-09-04T17:37:24.008145525Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:37:24.008446 containerd[1445]: time="2024-09-04T17:37:24.008411143Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:37:24.008586 containerd[1445]: time="2024-09-04T17:37:24.008521490Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:37:24.008586 containerd[1445]: time="2024-09-04T17:37:24.008535907Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:37:24.008586 containerd[1445]: time="2024-09-04T17:37:24.008549052Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:37:24.008586 containerd[1445]: time="2024-09-04T17:37:24.008561785Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:37:24.008586 containerd[1445]: time="2024-09-04T17:37:24.008573778Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:37:24.008586 containerd[1445]: time="2024-09-04T17:37:24.008585350Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008599636Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008613412Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008625154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008636636Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008649029Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008666712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008679406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008699313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008712849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008724951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008737806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 4 17:37:24.008746 containerd[1445]: time="2024-09-04T17:37:24.008752804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008766479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008779434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008793019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008805382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008816693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008829097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008844035Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008861648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008872949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008883228Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008924816Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008942029Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:37:24.008969 containerd[1445]: time="2024-09-04T17:37:24.008962006Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:37:24.009192 containerd[1445]: time="2024-09-04T17:37:24.008973107Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:37:24.009192 containerd[1445]: time="2024-09-04T17:37:24.008983416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:37:24.009192 containerd[1445]: time="2024-09-04T17:37:24.008995499Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:37:24.009192 containerd[1445]: time="2024-09-04T17:37:24.009010607Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:37:24.009192 containerd[1445]: time="2024-09-04T17:37:24.009020746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:37:24.009325 containerd[1445]: time="2024-09-04T17:37:24.009271727Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:37:24.009325 containerd[1445]: time="2024-09-04T17:37:24.009324887Z" level=info msg="Connect containerd service" Sep 4 17:37:24.009325 containerd[1445]: time="2024-09-04T17:37:24.009378237Z" level=info msg="using legacy CRI server" Sep 4 17:37:24.009325 containerd[1445]: time="2024-09-04T17:37:24.009385190Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:37:24.009325 containerd[1445]: time="2024-09-04T17:37:24.009513430Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:37:24.010129 containerd[1445]: time="2024-09-04T17:37:24.010103046Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:37:24.011539 
containerd[1445]: time="2024-09-04T17:37:24.010555655Z" level=info msg="Start subscribing containerd event" Sep 4 17:37:24.011539 containerd[1445]: time="2024-09-04T17:37:24.010980912Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:37:24.011539 containerd[1445]: time="2024-09-04T17:37:24.011054510Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:37:24.012430 containerd[1445]: time="2024-09-04T17:37:24.012289045Z" level=info msg="Start recovering state" Sep 4 17:37:24.012590 containerd[1445]: time="2024-09-04T17:37:24.012559503Z" level=info msg="Start event monitor" Sep 4 17:37:24.012622 containerd[1445]: time="2024-09-04T17:37:24.012588307Z" level=info msg="Start snapshots syncer" Sep 4 17:37:24.012622 containerd[1445]: time="2024-09-04T17:37:24.012605228Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:37:24.012622 containerd[1445]: time="2024-09-04T17:37:24.012616089Z" level=info msg="Start streaming server" Sep 4 17:37:24.012930 containerd[1445]: time="2024-09-04T17:37:24.012904620Z" level=info msg="containerd successfully booted in 0.173625s" Sep 4 17:37:24.013056 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:37:24.072081 tar[1443]: linux-amd64/LICENSE Sep 4 17:37:24.072175 tar[1443]: linux-amd64/README.md Sep 4 17:37:24.085045 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:37:24.233313 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:37:24.247894 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:47600.service - OpenSSH per-connection server daemon (10.0.0.1:47600). Sep 4 17:37:24.301469 sshd[1515]: Accepted publickey for core from 10.0.0.1 port 47600 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:37:24.304378 sshd[1515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:24.313509 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:37:24.323688 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:37:24.327795 systemd-logind[1432]: New session 1 of user core. Sep 4 17:37:24.337713 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:37:24.353808 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:37:24.358436 (systemd)[1519]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:37:24.479033 systemd[1519]: Queued start job for default target default.target. Sep 4 17:37:24.488706 systemd[1519]: Created slice app.slice - User Application Slice. Sep 4 17:37:24.488735 systemd[1519]: Reached target paths.target - Paths. Sep 4 17:37:24.488749 systemd[1519]: Reached target timers.target - Timers. Sep 4 17:37:24.490465 systemd[1519]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:37:24.505183 systemd[1519]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:37:24.505363 systemd[1519]: Reached target sockets.target - Sockets. Sep 4 17:37:24.505388 systemd[1519]: Reached target basic.target - Basic System. Sep 4 17:37:24.505435 systemd[1519]: Reached target default.target - Main User Target. Sep 4 17:37:24.505477 systemd[1519]: Startup finished in 139ms. Sep 4 17:37:24.506005 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:37:24.508873 systemd[1]: Started session-1.scope - Session 1 of User core. 
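
[annotation] containerd is now fully up (booted in 0.173625s, serving both endpoints on /run/containerd/containerd.sock), and the one complaint left from its CRI plugin is the init-time "no network config found in /etc/cni/net.d" error a few lines above; the config dump pins the directories it watches (/etc/cni/net.d for config, /opt/cni/bin for binaries). A network add-on normally installs the missing conflist later; a minimal hand-written sketch that would satisfy the syncer, assuming the reference bridge/host-local/portmap plugins exist under /opt/cni/bin:

    mkdir -p /etc/cni/net.d
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "0.4.0",
      "name": "bridgenet",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
    ctr --address /run/containerd/containerd.sock version   # confirm the daemon answers
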
Sep 4 17:37:24.573720 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:47604.service - OpenSSH per-connection server daemon (10.0.0.1:47604). Sep 4 17:37:24.614797 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 47604 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:37:24.616731 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:24.621945 systemd-logind[1432]: New session 2 of user core. Sep 4 17:37:24.632588 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:37:24.689148 sshd[1530]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:24.707589 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:47604.service: Deactivated successfully. Sep 4 17:37:24.709542 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:37:24.711300 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:37:24.721881 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:47612.service - OpenSSH per-connection server daemon (10.0.0.1:47612). Sep 4 17:37:24.724319 systemd-logind[1432]: Removed session 2. Sep 4 17:37:24.757041 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 47612 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:37:24.758840 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:24.763051 systemd-logind[1432]: New session 3 of user core. Sep 4 17:37:24.769520 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:37:24.825486 sshd[1537]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:24.829809 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:47612.service: Deactivated successfully. Sep 4 17:37:24.831593 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:37:24.832217 systemd-logind[1432]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:37:24.833082 systemd-logind[1432]: Removed session 3. Sep 4 17:37:24.963616 systemd-networkd[1386]: eth0: Gained IPv6LL Sep 4 17:37:24.966821 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:37:24.968794 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:37:24.978653 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:37:24.981145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:24.983612 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:37:25.003772 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 17:37:25.004051 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:37:25.005891 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:37:25.007983 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:37:25.592084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:37:25.593734 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:37:25.596020 systemd[1]: Startup finished in 819ms (kernel) + 5.006s (initrd) + 4.271s (userspace) = 10.097s. 
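
[annotation] The closing "Startup finished in 819ms (kernel) + 5.006s (initrd) + 4.271s (userspace) = 10.097s" is the same accounting systemd-analyze reports after boot; to attribute the userspace share to individual units:

    systemd-analyze                                   # kernel/initrd/userspace split, as logged above
    systemd-analyze blame                             # units sorted by startup time
    systemd-analyze critical-chain multi-user.target  # the path to the target reached above
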
Sep 4 17:37:25.617701 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:37:26.115990 kubelet[1566]: E0904 17:37:26.115910 1566 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:37:26.120886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:37:26.121119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:37:26.121473 systemd[1]: kubelet.service: Consumed 1.008s CPU time. Sep 4 17:37:34.835804 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:58030.service - OpenSSH per-connection server daemon (10.0.0.1:58030). Sep 4 17:37:34.871128 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 58030 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:37:34.872724 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:34.876200 systemd-logind[1432]: New session 4 of user core. Sep 4 17:37:34.886456 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:37:34.939692 sshd[1580]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:34.962619 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:58030.service: Deactivated successfully. Sep 4 17:37:34.964160 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:37:34.965546 systemd-logind[1432]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:37:34.974566 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:58044.service - OpenSSH per-connection server daemon (10.0.0.1:58044). Sep 4 17:37:34.975318 systemd-logind[1432]: Removed session 4. Sep 4 17:37:35.009667 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 58044 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:37:35.011240 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:35.014742 systemd-logind[1432]: New session 5 of user core. Sep 4 17:37:35.024449 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:37:35.073400 sshd[1587]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:35.088776 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:58044.service: Deactivated successfully. Sep 4 17:37:35.090217 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:37:35.091827 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:37:35.096650 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:58056.service - OpenSSH per-connection server daemon (10.0.0.1:58056). Sep 4 17:37:35.097581 systemd-logind[1432]: Removed session 5. Sep 4 17:37:35.127058 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 58056 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:37:35.128424 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:35.132239 systemd-logind[1432]: New session 6 of user core. Sep 4 17:37:35.140456 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:37:35.194540 sshd[1594]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:35.206960 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:58056.service: Deactivated successfully. 
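
[annotation] The kubelet exit above is the expected pre-bootstrap state: the unit starts before any cluster exists, and /var/lib/kubelet/config.yaml is only written by kubeadm during init (or join), so kubelet exits with status 1 and systemd keeps rescheduling it, as the later restart-counter lines show. A sketch of the step that ends the loop; the flags are illustrative, not taken from this host:

    kubeadm init --kubernetes-version v1.29.8 --pod-network-cidr=10.244.0.0/16
    ls /var/lib/kubelet/config.yaml   # now present; the next kubelet restart succeeds
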
Sep 4 17:37:35.208703 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:37:35.210217 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:37:35.211470 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:58064.service - OpenSSH per-connection server daemon (10.0.0.1:58064). Sep 4 17:37:35.212106 systemd-logind[1432]: Removed session 6. Sep 4 17:37:35.245880 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 58064 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:37:35.247315 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:35.250876 systemd-logind[1432]: New session 7 of user core. Sep 4 17:37:35.260459 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:37:35.317027 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:37:35.317392 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:37:35.346389 sudo[1604]: pam_unix(sudo:session): session closed for user root Sep 4 17:37:35.348131 sshd[1601]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:35.359049 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:58064.service: Deactivated successfully. Sep 4 17:37:35.360806 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:37:35.362424 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:37:35.378722 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:58080.service - OpenSSH per-connection server daemon (10.0.0.1:58080). Sep 4 17:37:35.379774 systemd-logind[1432]: Removed session 7. Sep 4 17:37:35.411578 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 58080 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:37:35.413450 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:35.417436 systemd-logind[1432]: New session 8 of user core. Sep 4 17:37:35.431457 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:37:35.485349 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:37:35.485701 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:37:35.489599 sudo[1613]: pam_unix(sudo:session): session closed for user root Sep 4 17:37:35.497513 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:37:35.497959 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:37:35.515548 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:37:35.517173 auditctl[1616]: No rules Sep 4 17:37:35.518404 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:37:35.518688 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:37:35.520397 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:37:35.551483 augenrules[1634]: No rules Sep 4 17:37:35.553400 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:37:35.554737 sudo[1612]: pam_unix(sudo:session): session closed for user root Sep 4 17:37:35.556845 sshd[1609]: pam_unix(sshd:session): session closed for user core Sep 4 17:37:35.571958 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:58080.service: Deactivated successfully. 
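
[annotation] The sudo sequence above deletes two shipped audit rule files and restarts audit-rules.service; both auditctl and augenrules then report "No rules", i.e. an intentionally empty rule set. Its interactive equivalent, with paths taken from the log:

    auditctl -l                                   # list currently loaded rules
    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load                             # recompile and load rules.d/
    auditctl -l                                   # -> "No rules"
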
Sep 4 17:37:35.573690 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:37:35.575278 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:37:35.581678 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:58094.service - OpenSSH per-connection server daemon (10.0.0.1:58094). Sep 4 17:37:35.582427 systemd-logind[1432]: Removed session 8. Sep 4 17:37:35.618000 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 58094 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:37:35.619921 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:37:35.623445 systemd-logind[1432]: New session 9 of user core. Sep 4 17:37:35.634464 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:37:35.687568 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:37:35.687911 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:37:35.802569 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:37:35.802724 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:37:36.371296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:37:36.405607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:36.482321 dockerd[1655]: time="2024-09-04T17:37:36.482025773Z" level=info msg="Starting up" Sep 4 17:37:36.574870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:37:36.581732 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:37:36.926996 kubelet[1687]: E0904 17:37:36.926920 1687 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:37:36.929572 dockerd[1655]: time="2024-09-04T17:37:36.929524232Z" level=info msg="Loading containers: start." Sep 4 17:37:36.936095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:37:36.936299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:37:37.065368 kernel: Initializing XFRM netlink socket Sep 4 17:37:37.152793 systemd-networkd[1386]: docker0: Link UP Sep 4 17:37:37.304098 dockerd[1655]: time="2024-09-04T17:37:37.303969100Z" level=info msg="Loading containers: done." Sep 4 17:37:37.319563 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck265188103-merged.mount: Deactivated successfully. 
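
[annotation] dockerd is starting here and completes initialization a few lines below ("API listen on /run/docker.sock"); the overlay2 warning that follows is informational (a kernel with redirect_dir enabled disables the native diff driver, which mainly affects image-build performance). Once the daemon is up it is reachable over its Unix socket; quick checks, assuming the default paths from the log:

    curl --unix-socket /run/docker.sock http://localhost/_ping   # -> OK
    docker info --format '{{.ServerVersion}} {{.Driver}}'        # 26.1.0 overlay2
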
Sep 4 17:37:37.426007 dockerd[1655]: time="2024-09-04T17:37:37.425954867Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:37:37.426179 dockerd[1655]: time="2024-09-04T17:37:37.426105259Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:37:37.426279 dockerd[1655]: time="2024-09-04T17:37:37.426256332Z" level=info msg="Daemon has completed initialization" Sep 4 17:37:37.482426 dockerd[1655]: time="2024-09-04T17:37:37.482312117Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:37:37.482605 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:37:38.332191 containerd[1445]: time="2024-09-04T17:37:38.332119950Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 17:37:39.779431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917015929.mount: Deactivated successfully. Sep 4 17:37:42.409753 containerd[1445]: time="2024-09-04T17:37:42.409662067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:42.419501 containerd[1445]: time="2024-09-04T17:37:42.419397033Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=35232949" Sep 4 17:37:42.428654 containerd[1445]: time="2024-09-04T17:37:42.428599822Z" level=info msg="ImageCreate event name:\"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:42.442271 containerd[1445]: time="2024-09-04T17:37:42.442213603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:42.443424 containerd[1445]: time="2024-09-04T17:37:42.443372927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"35229749\" in 4.111204506s" Sep 4 17:37:42.443480 containerd[1445]: time="2024-09-04T17:37:42.443428391Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:ea7e9c4af6a6f4f2fc0b86f81d102bf60167b3cbd4ce7d1545833b0283ab80b7\"" Sep 4 17:37:42.471470 containerd[1445]: time="2024-09-04T17:37:42.471428691Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 17:37:46.308832 containerd[1445]: time="2024-09-04T17:37:46.308752112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:46.337116 containerd[1445]: time="2024-09-04T17:37:46.337079176Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=32206206" Sep 4 17:37:46.378295 containerd[1445]: time="2024-09-04T17:37:46.378262880Z" level=info msg="ImageCreate event name:\"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:46.448467 containerd[1445]: time="2024-09-04T17:37:46.448428355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:46.449514 containerd[1445]: time="2024-09-04T17:37:46.449473595Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"33756152\" in 3.978003657s" Sep 4 17:37:46.449564 containerd[1445]: time="2024-09-04T17:37:46.449515374Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:b469e8ed7312f97f28340218ee5884606f9998ad73d3692a6078a2692253589a\"" Sep 4 17:37:46.475256 containerd[1445]: time="2024-09-04T17:37:46.475211072Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 17:37:47.186572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:37:47.200546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:47.384718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:37:47.393953 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:37:47.674385 kubelet[1907]: E0904 17:37:47.673646 1907 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:37:47.678669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:37:47.678921 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
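
[annotation] This is the second time systemd schedules a kubelet restart for the same missing-config failure; the counter in these "Scheduled restart job" lines can be read back from the unit (property name per systemd, shown as an assumption):

    systemctl show kubelet -p NRestarts   # matches the restart counter logged above
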
Sep 4 17:37:47.976090 containerd[1445]: time="2024-09-04T17:37:47.975957808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:47.976817 containerd[1445]: time="2024-09-04T17:37:47.976750425Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=17321507" Sep 4 17:37:47.978078 containerd[1445]: time="2024-09-04T17:37:47.978040815Z" level=info msg="ImageCreate event name:\"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:47.981190 containerd[1445]: time="2024-09-04T17:37:47.981162970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:47.982255 containerd[1445]: time="2024-09-04T17:37:47.982221755Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"18871471\" in 1.506967292s" Sep 4 17:37:47.982309 containerd[1445]: time="2024-09-04T17:37:47.982257202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:e932331104a0d08ad33e8c298f0c2a9a23378869c8fc0915df299b611c196f21\"" Sep 4 17:37:48.005841 containerd[1445]: time="2024-09-04T17:37:48.005793541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 17:37:49.305417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3966750860.mount: Deactivated successfully. 
Sep 4 17:37:50.395604 containerd[1445]: time="2024-09-04T17:37:50.395548755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:50.396300 containerd[1445]: time="2024-09-04T17:37:50.396252384Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=28600380" Sep 4 17:37:50.397403 containerd[1445]: time="2024-09-04T17:37:50.397376082Z" level=info msg="ImageCreate event name:\"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:50.399379 containerd[1445]: time="2024-09-04T17:37:50.399347168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:50.399936 containerd[1445]: time="2024-09-04T17:37:50.399896098Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"28599399\" in 2.394064766s" Sep 4 17:37:50.399963 containerd[1445]: time="2024-09-04T17:37:50.399939098Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:b6e10835ec72a48862d901a23b7c4c924300c3f6cfe89cd6031533b67e1f4e54\"" Sep 4 17:37:50.422963 containerd[1445]: time="2024-09-04T17:37:50.422915577Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:37:50.938032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938288614.mount: Deactivated successfully. 
Sep 4 17:37:51.873724 containerd[1445]: time="2024-09-04T17:37:51.873663488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:51.874525 containerd[1445]: time="2024-09-04T17:37:51.874475020Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Sep 4 17:37:51.875500 containerd[1445]: time="2024-09-04T17:37:51.875468122Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:51.878149 containerd[1445]: time="2024-09-04T17:37:51.878115487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:51.879035 containerd[1445]: time="2024-09-04T17:37:51.879003742Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.456053921s" Sep 4 17:37:51.879072 containerd[1445]: time="2024-09-04T17:37:51.879035843Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Sep 4 17:37:51.904217 containerd[1445]: time="2024-09-04T17:37:51.904185016Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:37:53.203001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013523082.mount: Deactivated successfully. 
Sep 4 17:37:53.216700 containerd[1445]: time="2024-09-04T17:37:53.216645849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:53.218995 containerd[1445]: time="2024-09-04T17:37:53.218934010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Sep 4 17:37:53.220767 containerd[1445]: time="2024-09-04T17:37:53.220728856Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:53.223721 containerd[1445]: time="2024-09-04T17:37:53.223681984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:53.224345 containerd[1445]: time="2024-09-04T17:37:53.224299713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.320083267s" Sep 4 17:37:53.224391 containerd[1445]: time="2024-09-04T17:37:53.224350207Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Sep 4 17:37:53.248703 containerd[1445]: time="2024-09-04T17:37:53.248654887Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:37:54.124734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365850894.mount: Deactivated successfully. Sep 4 17:37:57.326505 containerd[1445]: time="2024-09-04T17:37:57.326442978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:57.327123 containerd[1445]: time="2024-09-04T17:37:57.327055916Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Sep 4 17:37:57.328391 containerd[1445]: time="2024-09-04T17:37:57.328317750Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:57.331301 containerd[1445]: time="2024-09-04T17:37:57.331273138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:37:57.332470 containerd[1445]: time="2024-09-04T17:37:57.332434861Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.083737583s" Sep 4 17:37:57.332470 containerd[1445]: time="2024-09-04T17:37:57.332465900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Sep 4 17:37:57.712079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
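
[annotation] The PullImage sequence above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) is the standard v1.29.8 control-plane image set being pre-fetched into containerd before the cluster is initialized. The same pre-pull can be driven by hand, assuming crictl and kubeadm are present on the host:

    kubeadm config images pull --kubernetes-version v1.29.8   # pulls the whole set
    crictl images | grep registry.k8s.io                      # verify what landed
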
Sep 4 17:37:57.725532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:57.872009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:37:57.877600 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:37:57.948927 kubelet[2095]: E0904 17:37:57.948844 2095 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:37:57.953654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:37:57.954098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:37:59.837794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:37:59.848547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:37:59.865522 systemd[1]: Reloading requested from client PID 2147 ('systemctl') (unit session-9.scope)... Sep 4 17:37:59.865543 systemd[1]: Reloading... Sep 4 17:37:59.947387 zram_generator::config[2190]: No configuration found. Sep 4 17:38:00.459613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:38:00.534192 systemd[1]: Reloading finished in 668 ms. Sep 4 17:38:00.582383 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:38:00.586704 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:38:00.586939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:38:00.588500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:38:00.741987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:38:00.746180 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:38:00.786392 kubelet[2234]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:38:00.786392 kubelet[2234]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:38:00.786392 kubelet[2234]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
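
[annotation] During the reload above, systemd warns that docker.socket still points ListenStream= at the legacy /var/run/ path. A drop-in clears the warning without editing the vendor unit; the empty assignment resets the list before re-adding the canonical path (drop-in name is illustrative):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-runpath.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload
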
Sep 4 17:38:00.786776 kubelet[2234]: I0904 17:38:00.786433 2234 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:38:01.131466 kubelet[2234]: I0904 17:38:01.131362 2234 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:38:01.131466 kubelet[2234]: I0904 17:38:01.131396 2234 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:38:01.131653 kubelet[2234]: I0904 17:38:01.131593 2234 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:38:01.146478 kubelet[2234]: E0904 17:38:01.146448 2234 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:01.147173 kubelet[2234]: I0904 17:38:01.147146 2234 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:38:01.159094 kubelet[2234]: I0904 17:38:01.159064 2234 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:38:01.159344 kubelet[2234]: I0904 17:38:01.159307 2234 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:38:01.159492 kubelet[2234]: I0904 17:38:01.159467 2234 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:38:01.159878 kubelet[2234]: I0904 17:38:01.159853 2234 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:38:01.159878 kubelet[2234]: I0904 17:38:01.159868 2234 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:38:01.159999 kubelet[2234]: I0904 17:38:01.159979 2234 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:38:01.160094 kubelet[2234]: I0904 17:38:01.160074 2234 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:38:01.160094 kubelet[2234]: I0904 
17:38:01.160089 2234 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:38:01.160156 kubelet[2234]: I0904 17:38:01.160115 2234 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:38:01.160156 kubelet[2234]: I0904 17:38:01.160131 2234 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:38:01.161366 kubelet[2234]: I0904 17:38:01.161099 2234 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:38:01.162020 kubelet[2234]: W0904 17:38:01.161961 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:01.162020 kubelet[2234]: E0904 17:38:01.162006 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:01.162184 kubelet[2234]: W0904 17:38:01.162033 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:01.162184 kubelet[2234]: E0904 17:38:01.162058 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:01.163608 kubelet[2234]: I0904 17:38:01.163590 2234 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:38:01.163677 kubelet[2234]: W0904 17:38:01.163656 2234 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
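
[annotation] The container-manager dump above fixes the effective node configuration: cgroup driver systemd, cgroup root /, and hard eviction thresholds of 100Mi memory.available, 10% nodefs.available, 5% nodefs.inodesFree and 15% imagefs.available. Expressed as the config.yaml kubeadm would generate (a sketch of the equivalent fields, not the actual file from this host):

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
    staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" above
    EOF

The repeated "connect: connection refused" errors against 10.0.0.117:6443 in the surrounding lines persist until the control-plane static pods under that manifest path bring the API server up.
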
Sep 4 17:38:01.164218 kubelet[2234]: I0904 17:38:01.164205 2234 server.go:1256] "Started kubelet" Sep 4 17:38:01.164274 kubelet[2234]: I0904 17:38:01.164255 2234 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:38:01.165231 kubelet[2234]: I0904 17:38:01.164565 2234 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:38:01.165231 kubelet[2234]: I0904 17:38:01.164963 2234 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:38:01.165231 kubelet[2234]: I0904 17:38:01.164982 2234 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:38:01.165628 kubelet[2234]: I0904 17:38:01.165494 2234 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:38:01.167815 kubelet[2234]: E0904 17:38:01.167800 2234 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:38:01.168355 kubelet[2234]: I0904 17:38:01.167853 2234 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:38:01.168355 kubelet[2234]: W0904 17:38:01.168118 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:01.168355 kubelet[2234]: I0904 17:38:01.167877 2234 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:38:01.168355 kubelet[2234]: E0904 17:38:01.168151 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:01.168355 kubelet[2234]: I0904 17:38:01.168192 2234 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:38:01.168670 kubelet[2234]: E0904 17:38:01.168629 2234 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:38:01.169272 kubelet[2234]: E0904 17:38:01.169220 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Sep 4 17:38:01.169604 kubelet[2234]: I0904 17:38:01.169571 2234 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:38:01.169800 kubelet[2234]: I0904 17:38:01.169677 2234 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:38:01.170011 kubelet[2234]: E0904 17:38:01.169988 2234 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21b2b90dbb4e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:38:01.164182754 +0000 UTC m=+0.414173865,LastTimestamp:2024-09-04 17:38:01.164182754 +0000 UTC m=+0.414173865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:38:01.170393 kubelet[2234]: I0904 17:38:01.170367 2234 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:38:01.183819 kubelet[2234]: I0904 17:38:01.183785 2234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:38:01.184955 kubelet[2234]: I0904 17:38:01.184583 2234 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:38:01.184955 kubelet[2234]: I0904 17:38:01.184610 2234 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:38:01.184955 kubelet[2234]: I0904 17:38:01.184625 2234 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:38:01.185520 kubelet[2234]: I0904 17:38:01.185189 2234 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:38:01.185520 kubelet[2234]: I0904 17:38:01.185214 2234 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:38:01.185520 kubelet[2234]: I0904 17:38:01.185231 2234 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:38:01.185520 kubelet[2234]: E0904 17:38:01.185280 2234 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:38:01.269786 kubelet[2234]: I0904 17:38:01.269761 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:38:01.270133 kubelet[2234]: E0904 17:38:01.270102 2234 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 4 17:38:01.286167 kubelet[2234]: E0904 17:38:01.286132 2234 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:38:01.369843 kubelet[2234]: E0904 17:38:01.369795 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Sep 4 17:38:01.472291 kubelet[2234]: I0904 17:38:01.472191 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:38:01.472582 kubelet[2234]: E0904 17:38:01.472553 2234 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 4 17:38:01.486781 kubelet[2234]: E0904 17:38:01.486745 2234 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:38:01.753606 kubelet[2234]: W0904 17:38:01.753463 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:01.753606 kubelet[2234]: E0904 17:38:01.753530 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:01.771032 kubelet[2234]: E0904 17:38:01.771003 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Sep 4 17:38:01.867357 kubelet[2234]: I0904 17:38:01.867284 2234 policy_none.go:49] "None policy: Start" Sep 4 17:38:01.868152 kubelet[2234]: I0904 17:38:01.868127 2234 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:38:01.868217 kubelet[2234]: I0904 17:38:01.868166 2234 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:38:01.874351 kubelet[2234]: I0904 17:38:01.874299 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:38:01.874710 kubelet[2234]: E0904 17:38:01.874692 2234 kubelet_node_status.go:96] "Unable to register node with API 
server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 4 17:38:01.887746 kubelet[2234]: E0904 17:38:01.887731 2234 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:38:02.095397 kubelet[2234]: W0904 17:38:02.095210 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:02.095397 kubelet[2234]: E0904 17:38:02.095297 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:02.180393 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:38:02.195407 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:38:02.198473 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:38:02.210271 kubelet[2234]: I0904 17:38:02.210235 2234 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:38:02.210580 kubelet[2234]: I0904 17:38:02.210563 2234 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:38:02.211935 kubelet[2234]: E0904 17:38:02.211917 2234 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:38:02.572365 kubelet[2234]: E0904 17:38:02.572219 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s" Sep 4 17:38:02.580833 kubelet[2234]: W0904 17:38:02.580794 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:02.580886 kubelet[2234]: E0904 17:38:02.580842 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:02.653317 kubelet[2234]: W0904 17:38:02.653253 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:02.653317 kubelet[2234]: E0904 17:38:02.653314 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:02.676533 kubelet[2234]: I0904 17:38:02.676503 2234 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Sep 4 17:38:02.676739 kubelet[2234]: E0904 17:38:02.676719 2234 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 4 17:38:02.688883 kubelet[2234]: I0904 17:38:02.688846 2234 topology_manager.go:215] "Topology Admit Handler" podUID="4c18280e060422a631a89f8db1dc0bef" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:38:02.689673 kubelet[2234]: I0904 17:38:02.689649 2234 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:38:02.690412 kubelet[2234]: I0904 17:38:02.690386 2234 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:38:02.695901 systemd[1]: Created slice kubepods-burstable-pod4c18280e060422a631a89f8db1dc0bef.slice - libcontainer container kubepods-burstable-pod4c18280e060422a631a89f8db1dc0bef.slice. Sep 4 17:38:02.705787 systemd[1]: Created slice kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice - libcontainer container kubepods-burstable-pod7fa6213ac08f24a6b78f4cd3838d26c9.slice. Sep 4 17:38:02.709303 systemd[1]: Created slice kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice - libcontainer container kubepods-burstable-podd9ddd765c3b0fcde29edfee4da9578f6.slice. Sep 4 17:38:02.739390 kubelet[2234]: W0904 17:38:02.739319 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:02.739390 kubelet[2234]: E0904 17:38:02.739388 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:02.776690 kubelet[2234]: I0904 17:38:02.776659 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:38:02.776787 kubelet[2234]: I0904 17:38:02.776767 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c18280e060422a631a89f8db1dc0bef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c18280e060422a631a89f8db1dc0bef\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:38:02.776820 kubelet[2234]: I0904 17:38:02.776807 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:02.776845 kubelet[2234]: I0904 17:38:02.776835 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4c18280e060422a631a89f8db1dc0bef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c18280e060422a631a89f8db1dc0bef\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:38:02.776874 kubelet[2234]: I0904 17:38:02.776853 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c18280e060422a631a89f8db1dc0bef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c18280e060422a631a89f8db1dc0bef\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:38:02.776902 kubelet[2234]: I0904 17:38:02.776875 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:02.776932 kubelet[2234]: I0904 17:38:02.776920 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:02.776957 kubelet[2234]: I0904 17:38:02.776948 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:02.777009 kubelet[2234]: I0904 17:38:02.776987 2234 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:03.003837 kubelet[2234]: E0904 17:38:03.003740 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:03.004533 containerd[1445]: time="2024-09-04T17:38:03.004489757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c18280e060422a631a89f8db1dc0bef,Namespace:kube-system,Attempt:0,}" Sep 4 17:38:03.008686 kubelet[2234]: E0904 17:38:03.008669 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:03.009201 containerd[1445]: time="2024-09-04T17:38:03.009026507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,}" Sep 4 17:38:03.011240 kubelet[2234]: E0904 17:38:03.011210 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:03.011545 containerd[1445]: time="2024-09-04T17:38:03.011522496Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,}" Sep 4 17:38:03.295409 kubelet[2234]: E0904 17:38:03.295294 2234 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:03.554674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1503191358.mount: Deactivated successfully. Sep 4 17:38:03.562548 containerd[1445]: time="2024-09-04T17:38:03.562507081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:38:03.563501 containerd[1445]: time="2024-09-04T17:38:03.563468063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:38:03.564416 containerd[1445]: time="2024-09-04T17:38:03.564385182Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:38:03.565437 containerd[1445]: time="2024-09-04T17:38:03.565300727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:38:03.566190 containerd[1445]: time="2024-09-04T17:38:03.566159534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:38:03.567001 containerd[1445]: time="2024-09-04T17:38:03.566964188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:38:03.567962 containerd[1445]: time="2024-09-04T17:38:03.567901856Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:38:03.571603 containerd[1445]: time="2024-09-04T17:38:03.571557566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:38:03.572407 containerd[1445]: time="2024-09-04T17:38:03.572373982Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 563.295536ms" Sep 4 17:38:03.573766 containerd[1445]: time="2024-09-04T17:38:03.573737372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 569.165408ms" Sep 4 17:38:03.575198 containerd[1445]: time="2024-09-04T17:38:03.575160705Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 563.592301ms" Sep 4 17:38:03.740527 kubelet[2234]: W0904 17:38:03.740479 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:03.740527 kubelet[2234]: E0904 17:38:03.740524 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:03.923537 containerd[1445]: time="2024-09-04T17:38:03.923242317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:03.923537 containerd[1445]: time="2024-09-04T17:38:03.923303524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:03.923537 containerd[1445]: time="2024-09-04T17:38:03.923317250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:03.923537 containerd[1445]: time="2024-09-04T17:38:03.923033820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:03.923537 containerd[1445]: time="2024-09-04T17:38:03.923100607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:03.923537 containerd[1445]: time="2024-09-04T17:38:03.923135524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:03.923537 containerd[1445]: time="2024-09-04T17:38:03.923281822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:03.923991 containerd[1445]: time="2024-09-04T17:38:03.923424204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:03.924633 containerd[1445]: time="2024-09-04T17:38:03.924388402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:03.924633 containerd[1445]: time="2024-09-04T17:38:03.924437816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:03.924633 containerd[1445]: time="2024-09-04T17:38:03.924450831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:03.924633 containerd[1445]: time="2024-09-04T17:38:03.924541895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:03.986608 systemd[1]: Started cri-containerd-e02d812308936714275117c8ac2cb29d37b11e568cbc74c752325519bfdbdc8e.scope - libcontainer container e02d812308936714275117c8ac2cb29d37b11e568cbc74c752325519bfdbdc8e. Sep 4 17:38:03.992091 systemd[1]: Started cri-containerd-404ad3606afcf98ed15ba23abd126db64e07725ec6f427eed1a6789443476012.scope - libcontainer container 404ad3606afcf98ed15ba23abd126db64e07725ec6f427eed1a6789443476012. Sep 4 17:38:03.993691 systemd[1]: Started cri-containerd-bc19d282364924dbd768690a7ad907c8cd352b8ba026703b2cd2c5e886c53ce5.scope - libcontainer container bc19d282364924dbd768690a7ad907c8cd352b8ba026703b2cd2c5e886c53ce5. Sep 4 17:38:04.038831 containerd[1445]: time="2024-09-04T17:38:04.038764563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d9ddd765c3b0fcde29edfee4da9578f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e02d812308936714275117c8ac2cb29d37b11e568cbc74c752325519bfdbdc8e\"" Sep 4 17:38:04.040961 kubelet[2234]: E0904 17:38:04.040941 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:04.045220 containerd[1445]: time="2024-09-04T17:38:04.045187025Z" level=info msg="CreateContainer within sandbox \"e02d812308936714275117c8ac2cb29d37b11e568cbc74c752325519bfdbdc8e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:38:04.045435 containerd[1445]: time="2024-09-04T17:38:04.045405952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7fa6213ac08f24a6b78f4cd3838d26c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"404ad3606afcf98ed15ba23abd126db64e07725ec6f427eed1a6789443476012\"" Sep 4 17:38:04.046147 kubelet[2234]: E0904 17:38:04.046123 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:04.047964 containerd[1445]: time="2024-09-04T17:38:04.047929088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c18280e060422a631a89f8db1dc0bef,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc19d282364924dbd768690a7ad907c8cd352b8ba026703b2cd2c5e886c53ce5\"" Sep 4 17:38:04.048466 kubelet[2234]: E0904 17:38:04.048447 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:04.048725 containerd[1445]: time="2024-09-04T17:38:04.048698613Z" level=info msg="CreateContainer within sandbox \"404ad3606afcf98ed15ba23abd126db64e07725ec6f427eed1a6789443476012\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:38:04.050571 containerd[1445]: time="2024-09-04T17:38:04.050543136Z" level=info msg="CreateContainer within sandbox \"bc19d282364924dbd768690a7ad907c8cd352b8ba026703b2cd2c5e886c53ce5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:38:04.149219 containerd[1445]: time="2024-09-04T17:38:04.149166667Z" level=info msg="CreateContainer within sandbox \"e02d812308936714275117c8ac2cb29d37b11e568cbc74c752325519bfdbdc8e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d0b86829cbf26c318482d0bc9e9948c4bc919016da01f801b0e1d323b4e85c8a\"" Sep 4 17:38:04.149867 
containerd[1445]: time="2024-09-04T17:38:04.149832415Z" level=info msg="StartContainer for \"d0b86829cbf26c318482d0bc9e9948c4bc919016da01f801b0e1d323b4e85c8a\"" Sep 4 17:38:04.172941 kubelet[2234]: E0904 17:38:04.172890 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="3.2s" Sep 4 17:38:04.176502 systemd[1]: Started cri-containerd-d0b86829cbf26c318482d0bc9e9948c4bc919016da01f801b0e1d323b4e85c8a.scope - libcontainer container d0b86829cbf26c318482d0bc9e9948c4bc919016da01f801b0e1d323b4e85c8a. Sep 4 17:38:04.205768 containerd[1445]: time="2024-09-04T17:38:04.205723075Z" level=info msg="CreateContainer within sandbox \"404ad3606afcf98ed15ba23abd126db64e07725ec6f427eed1a6789443476012\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"da8b569cfe2ef59d361a9d8b50e31779ee44ff5afc9c99de6327492032923a4f\"" Sep 4 17:38:04.206209 containerd[1445]: time="2024-09-04T17:38:04.206080185Z" level=info msg="StartContainer for \"da8b569cfe2ef59d361a9d8b50e31779ee44ff5afc9c99de6327492032923a4f\"" Sep 4 17:38:04.242564 systemd[1]: Started cri-containerd-da8b569cfe2ef59d361a9d8b50e31779ee44ff5afc9c99de6327492032923a4f.scope - libcontainer container da8b569cfe2ef59d361a9d8b50e31779ee44ff5afc9c99de6327492032923a4f. Sep 4 17:38:04.255756 kubelet[2234]: W0904 17:38:04.255701 2234 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:04.255809 kubelet[2234]: E0904 17:38:04.255764 2234 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 4 17:38:04.259404 containerd[1445]: time="2024-09-04T17:38:04.259344454Z" level=info msg="CreateContainer within sandbox \"bc19d282364924dbd768690a7ad907c8cd352b8ba026703b2cd2c5e886c53ce5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"58d03a517816421364b1dc52016d1988294225255c4b175dee384b85ee0e44f3\"" Sep 4 17:38:04.259504 containerd[1445]: time="2024-09-04T17:38:04.259368649Z" level=info msg="StartContainer for \"d0b86829cbf26c318482d0bc9e9948c4bc919016da01f801b0e1d323b4e85c8a\" returns successfully" Sep 4 17:38:04.260285 containerd[1445]: time="2024-09-04T17:38:04.260237294Z" level=info msg="StartContainer for \"58d03a517816421364b1dc52016d1988294225255c4b175dee384b85ee0e44f3\"" Sep 4 17:38:04.278710 kubelet[2234]: I0904 17:38:04.278527 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:38:04.278939 kubelet[2234]: E0904 17:38:04.278922 2234 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 4 17:38:04.381529 systemd[1]: Started cri-containerd-58d03a517816421364b1dc52016d1988294225255c4b175dee384b85ee0e44f3.scope - libcontainer container 58d03a517816421364b1dc52016d1988294225255c4b175dee384b85ee0e44f3. 
Sep 4 17:38:04.432505 containerd[1445]: time="2024-09-04T17:38:04.432032127Z" level=info msg="StartContainer for \"da8b569cfe2ef59d361a9d8b50e31779ee44ff5afc9c99de6327492032923a4f\" returns successfully" Sep 4 17:38:04.521061 containerd[1445]: time="2024-09-04T17:38:04.520990469Z" level=info msg="StartContainer for \"58d03a517816421364b1dc52016d1988294225255c4b175dee384b85ee0e44f3\" returns successfully" Sep 4 17:38:05.252986 kubelet[2234]: E0904 17:38:05.252848 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:05.258503 kubelet[2234]: E0904 17:38:05.258366 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:05.259896 kubelet[2234]: E0904 17:38:05.259868 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:06.029019 kubelet[2234]: E0904 17:38:06.028973 2234 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17f21b2b90dbb4e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:38:01.164182754 +0000 UTC m=+0.414173865,LastTimestamp:2024-09-04 17:38:01.164182754 +0000 UTC m=+0.414173865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:38:06.262276 kubelet[2234]: E0904 17:38:06.262212 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:06.262839 kubelet[2234]: E0904 17:38:06.262812 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:06.263419 kubelet[2234]: E0904 17:38:06.263370 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:06.337641 kubelet[2234]: E0904 17:38:06.337515 2234 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 4 17:38:06.698118 kubelet[2234]: E0904 17:38:06.698000 2234 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 4 17:38:07.133465 kubelet[2234]: E0904 17:38:07.133366 2234 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 4 17:38:07.263958 kubelet[2234]: E0904 17:38:07.263929 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:07.423573 kubelet[2234]: 
E0904 17:38:07.423462 2234 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:38:07.480074 kubelet[2234]: I0904 17:38:07.480055 2234 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:38:07.483038 kubelet[2234]: I0904 17:38:07.483020 2234 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:38:07.490092 kubelet[2234]: E0904 17:38:07.490053 2234 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:38:07.590596 kubelet[2234]: E0904 17:38:07.590556 2234 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:38:07.691330 kubelet[2234]: E0904 17:38:07.691217 2234 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:38:07.792117 kubelet[2234]: E0904 17:38:07.792071 2234 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:38:07.892591 kubelet[2234]: E0904 17:38:07.892548 2234 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:38:07.993252 kubelet[2234]: E0904 17:38:07.993112 2234 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:38:08.165610 kubelet[2234]: I0904 17:38:08.165547 2234 apiserver.go:52] "Watching apiserver" Sep 4 17:38:08.168273 kubelet[2234]: I0904 17:38:08.168214 2234 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:38:08.272120 kubelet[2234]: E0904 17:38:08.271998 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:08.325884 systemd[1]: Reloading requested from client PID 2518 ('systemctl') (unit session-9.scope)... Sep 4 17:38:08.325906 systemd[1]: Reloading... Sep 4 17:38:08.502613 update_engine[1435]: I0904 17:38:08.502538 1435 update_attempter.cc:509] Updating boot flags... Sep 4 17:38:08.505353 zram_generator::config[2559]: No configuration found. Sep 4 17:38:08.533020 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2582) Sep 4 17:38:08.562373 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2587) Sep 4 17:38:08.589364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2587) Sep 4 17:38:08.639028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:38:08.747538 systemd[1]: Reloading finished in 421 ms. Sep 4 17:38:08.829416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:38:08.848742 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:38:08.849018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:38:08.859812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:38:09.002999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:38:09.007536 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:38:09.052554 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:38:09.052554 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:38:09.052554 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:38:09.052955 kubelet[2615]: I0904 17:38:09.052591 2615 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:38:09.058056 kubelet[2615]: I0904 17:38:09.058025 2615 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:38:09.058056 kubelet[2615]: I0904 17:38:09.058048 2615 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:38:09.058423 kubelet[2615]: I0904 17:38:09.058407 2615 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:38:09.059874 kubelet[2615]: I0904 17:38:09.059856 2615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:38:09.063141 kubelet[2615]: I0904 17:38:09.063052 2615 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:38:09.072984 kubelet[2615]: I0904 17:38:09.072956 2615 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:38:09.074801 kubelet[2615]: I0904 17:38:09.073388 2615 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:38:09.074801 kubelet[2615]: I0904 17:38:09.073587 2615 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:38:09.074801 kubelet[2615]: I0904 17:38:09.073617 2615 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:38:09.074801 kubelet[2615]: I0904 17:38:09.073628 2615 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:38:09.074801 kubelet[2615]: I0904 17:38:09.073663 2615 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:38:09.074801 kubelet[2615]: I0904 17:38:09.073759 2615 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:38:09.075030 kubelet[2615]: I0904 17:38:09.073772 2615 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:38:09.075030 kubelet[2615]: I0904 17:38:09.073802 2615 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:38:09.075030 kubelet[2615]: I0904 17:38:09.073815 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:38:09.075564 kubelet[2615]: I0904 17:38:09.075544 2615 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:38:09.075819 kubelet[2615]: I0904 17:38:09.075806 2615 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:38:09.076519 kubelet[2615]: I0904 17:38:09.076408 2615 server.go:1256] "Started kubelet" Sep 4 17:38:09.077993 kubelet[2615]: I0904 17:38:09.077649 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:38:09.077993 kubelet[2615]: I0904 17:38:09.077964 2615 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:38:09.078056 kubelet[2615]: I0904 17:38:09.078021 2615 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:38:09.080561 kubelet[2615]: I0904 
17:38:09.080482 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:38:09.082396 kubelet[2615]: I0904 17:38:09.082267 2615 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:38:09.089090 kubelet[2615]: I0904 17:38:09.088605 2615 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:38:09.089090 kubelet[2615]: I0904 17:38:09.088993 2615 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:38:09.089215 kubelet[2615]: I0904 17:38:09.089189 2615 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:38:09.091467 kubelet[2615]: I0904 17:38:09.091443 2615 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:38:09.093104 kubelet[2615]: I0904 17:38:09.092966 2615 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:38:09.095175 kubelet[2615]: E0904 17:38:09.095150 2615 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:38:09.096815 kubelet[2615]: I0904 17:38:09.096647 2615 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:38:09.102960 kubelet[2615]: I0904 17:38:09.102910 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:38:09.104215 kubelet[2615]: I0904 17:38:09.104184 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:38:09.104262 kubelet[2615]: I0904 17:38:09.104225 2615 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:38:09.104262 kubelet[2615]: I0904 17:38:09.104246 2615 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:38:09.104321 kubelet[2615]: E0904 17:38:09.104313 2615 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:38:09.138165 kubelet[2615]: I0904 17:38:09.138130 2615 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:38:09.138165 kubelet[2615]: I0904 17:38:09.138150 2615 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:38:09.138165 kubelet[2615]: I0904 17:38:09.138172 2615 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:38:09.138457 kubelet[2615]: I0904 17:38:09.138359 2615 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:38:09.138457 kubelet[2615]: I0904 17:38:09.138385 2615 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:38:09.138457 kubelet[2615]: I0904 17:38:09.138392 2615 policy_none.go:49] "None policy: Start" Sep 4 17:38:09.139188 kubelet[2615]: I0904 17:38:09.138905 2615 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:38:09.139188 kubelet[2615]: I0904 17:38:09.138929 2615 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:38:09.139188 kubelet[2615]: I0904 17:38:09.139099 2615 state_mem.go:75] "Updated machine memory state" Sep 4 17:38:09.143206 kubelet[2615]: I0904 17:38:09.142873 2615 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:38:09.143206 kubelet[2615]: I0904 17:38:09.143096 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:38:09.193408 kubelet[2615]: I0904 17:38:09.193382 2615 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:38:09.197852 kubelet[2615]: I0904 17:38:09.197832 2615 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Sep 4 17:38:09.198062 kubelet[2615]: I0904 17:38:09.198002 2615 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:38:09.204428 kubelet[2615]: I0904 17:38:09.204404 2615 topology_manager.go:215] "Topology Admit Handler" podUID="d9ddd765c3b0fcde29edfee4da9578f6" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:38:09.204513 kubelet[2615]: I0904 17:38:09.204464 2615 topology_manager.go:215] "Topology Admit Handler" podUID="4c18280e060422a631a89f8db1dc0bef" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:38:09.204513 kubelet[2615]: I0904 17:38:09.204494 2615 topology_manager.go:215] "Topology Admit Handler" podUID="7fa6213ac08f24a6b78f4cd3838d26c9" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:38:09.209323 kubelet[2615]: E0904 17:38:09.209272 2615 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:38:09.390532 kubelet[2615]: I0904 17:38:09.390401 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:09.390532 kubelet[2615]: I0904 17:38:09.390450 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:09.390532 kubelet[2615]: I0904 17:38:09.390481 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9ddd765c3b0fcde29edfee4da9578f6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d9ddd765c3b0fcde29edfee4da9578f6\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:38:09.390532 kubelet[2615]: I0904 17:38:09.390506 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c18280e060422a631a89f8db1dc0bef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c18280e060422a631a89f8db1dc0bef\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:38:09.390532 kubelet[2615]: I0904 17:38:09.390526 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:09.390778 kubelet[2615]: I0904 17:38:09.390594 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:09.390778 kubelet[2615]: I0904 17:38:09.390626 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c18280e060422a631a89f8db1dc0bef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c18280e060422a631a89f8db1dc0bef\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:38:09.390778 kubelet[2615]: I0904 17:38:09.390671 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c18280e060422a631a89f8db1dc0bef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c18280e060422a631a89f8db1dc0bef\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:38:09.390778 kubelet[2615]: I0904 17:38:09.390692 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fa6213ac08f24a6b78f4cd3838d26c9-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7fa6213ac08f24a6b78f4cd3838d26c9\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:38:09.509649 kubelet[2615]: E0904 17:38:09.509592 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:09.511532 kubelet[2615]: E0904 17:38:09.510021 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:09.511532 kubelet[2615]: E0904 17:38:09.510565 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:10.075587 kubelet[2615]: I0904 17:38:10.075528 2615 apiserver.go:52] "Watching apiserver" Sep 4 17:38:10.089807 kubelet[2615]: I0904 17:38:10.089749 2615 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:38:10.125544 kubelet[2615]: E0904 17:38:10.125490 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:10.126635 kubelet[2615]: E0904 17:38:10.126603 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:10.127152 kubelet[2615]: E0904 17:38:10.127120 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:10.208115 kubelet[2615]: I0904 17:38:10.207808 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.207750624 podStartE2EDuration="1.207750624s" podCreationTimestamp="2024-09-04 17:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:38:10.180909093 +0000 UTC m=+1.168977833" watchObservedRunningTime="2024-09-04 17:38:10.207750624 +0000 UTC m=+1.195819364" Sep 4 17:38:10.227139 
kubelet[2615]: I0904 17:38:10.227100 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.227055512 podStartE2EDuration="2.227055512s" podCreationTimestamp="2024-09-04 17:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:38:10.208021237 +0000 UTC m=+1.196089977" watchObservedRunningTime="2024-09-04 17:38:10.227055512 +0000 UTC m=+1.215124252" Sep 4 17:38:10.245745 kubelet[2615]: I0904 17:38:10.245693 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.245608934 podStartE2EDuration="1.245608934s" podCreationTimestamp="2024-09-04 17:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:38:10.227716644 +0000 UTC m=+1.215785384" watchObservedRunningTime="2024-09-04 17:38:10.245608934 +0000 UTC m=+1.233677664" Sep 4 17:38:11.127152 kubelet[2615]: E0904 17:38:11.127109 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:13.568408 sudo[1645]: pam_unix(sudo:session): session closed for user root Sep 4 17:38:13.570068 sshd[1642]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:13.574603 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:58094.service: Deactivated successfully. Sep 4 17:38:13.576631 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:38:13.576823 systemd[1]: session-9.scope: Consumed 4.878s CPU time, 141.2M memory peak, 0B memory swap peak. Sep 4 17:38:13.577383 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:38:13.578373 systemd-logind[1432]: Removed session 9. 
Sep 4 17:38:15.604391 kubelet[2615]: E0904 17:38:15.604260 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:16.132278 kubelet[2615]: E0904 17:38:16.132234 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:17.532816 kubelet[2615]: E0904 17:38:17.532786 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:18.137927 kubelet[2615]: E0904 17:38:18.137875 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:19.139021 kubelet[2615]: E0904 17:38:19.138993 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:19.198492 kubelet[2615]: E0904 17:38:19.198456 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:20.140145 kubelet[2615]: E0904 17:38:20.140117 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:22.903645 kubelet[2615]: I0904 17:38:22.903247 2615 topology_manager.go:215] "Topology Admit Handler" podUID="808b69ae-ec6e-4c7d-b832-c3f19fa3c02a" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-pl7qh" Sep 4 17:38:22.911003 kubelet[2615]: I0904 17:38:22.910977 2615 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:38:22.911761 containerd[1445]: time="2024-09-04T17:38:22.911419449Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:38:22.912264 kubelet[2615]: I0904 17:38:22.911623 2615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:38:22.911529 systemd[1]: Created slice kubepods-besteffort-pod808b69ae_ec6e_4c7d_b832_c3f19fa3c02a.slice - libcontainer container kubepods-besteffort-pod808b69ae_ec6e_4c7d_b832_c3f19fa3c02a.slice. 
Sep 4 17:38:23.068103 kubelet[2615]: I0904 17:38:23.068050 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/808b69ae-ec6e-4c7d-b832-c3f19fa3c02a-var-lib-calico\") pod \"tigera-operator-5d56685c77-pl7qh\" (UID: \"808b69ae-ec6e-4c7d-b832-c3f19fa3c02a\") " pod="tigera-operator/tigera-operator-5d56685c77-pl7qh" Sep 4 17:38:23.068103 kubelet[2615]: I0904 17:38:23.068102 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wlnt\" (UniqueName: \"kubernetes.io/projected/808b69ae-ec6e-4c7d-b832-c3f19fa3c02a-kube-api-access-7wlnt\") pod \"tigera-operator-5d56685c77-pl7qh\" (UID: \"808b69ae-ec6e-4c7d-b832-c3f19fa3c02a\") " pod="tigera-operator/tigera-operator-5d56685c77-pl7qh" Sep 4 17:38:23.468927 kubelet[2615]: I0904 17:38:23.468880 2615 topology_manager.go:215] "Topology Admit Handler" podUID="6d76bd37-916a-49d2-936b-40ac9d57f190" podNamespace="kube-system" podName="kube-proxy-kg6jn" Sep 4 17:38:23.470020 kubelet[2615]: I0904 17:38:23.469984 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d76bd37-916a-49d2-936b-40ac9d57f190-kube-proxy\") pod \"kube-proxy-kg6jn\" (UID: \"6d76bd37-916a-49d2-936b-40ac9d57f190\") " pod="kube-system/kube-proxy-kg6jn" Sep 4 17:38:23.470020 kubelet[2615]: I0904 17:38:23.470029 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9pn\" (UniqueName: \"kubernetes.io/projected/6d76bd37-916a-49d2-936b-40ac9d57f190-kube-api-access-vp9pn\") pod \"kube-proxy-kg6jn\" (UID: \"6d76bd37-916a-49d2-936b-40ac9d57f190\") " pod="kube-system/kube-proxy-kg6jn" Sep 4 17:38:23.470212 kubelet[2615]: I0904 17:38:23.470048 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d76bd37-916a-49d2-936b-40ac9d57f190-lib-modules\") pod \"kube-proxy-kg6jn\" (UID: \"6d76bd37-916a-49d2-936b-40ac9d57f190\") " pod="kube-system/kube-proxy-kg6jn" Sep 4 17:38:23.470212 kubelet[2615]: I0904 17:38:23.470067 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d76bd37-916a-49d2-936b-40ac9d57f190-xtables-lock\") pod \"kube-proxy-kg6jn\" (UID: \"6d76bd37-916a-49d2-936b-40ac9d57f190\") " pod="kube-system/kube-proxy-kg6jn" Sep 4 17:38:23.474761 systemd[1]: Created slice kubepods-besteffort-pod6d76bd37_916a_49d2_936b_40ac9d57f190.slice - libcontainer container kubepods-besteffort-pod6d76bd37_916a_49d2_936b_40ac9d57f190.slice. 
Sep 4 17:38:23.522739 containerd[1445]: time="2024-09-04T17:38:23.522697464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-pl7qh,Uid:808b69ae-ec6e-4c7d-b832-c3f19fa3c02a,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:38:23.777130 kubelet[2615]: E0904 17:38:23.777067 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:23.777765 containerd[1445]: time="2024-09-04T17:38:23.777734027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kg6jn,Uid:6d76bd37-916a-49d2-936b-40ac9d57f190,Namespace:kube-system,Attempt:0,}" Sep 4 17:38:23.895773 containerd[1445]: time="2024-09-04T17:38:23.895616932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:23.895773 containerd[1445]: time="2024-09-04T17:38:23.895714837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:23.895773 containerd[1445]: time="2024-09-04T17:38:23.895741246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:23.895974 containerd[1445]: time="2024-09-04T17:38:23.895896880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:23.919664 systemd[1]: Started cri-containerd-598d834183bb1c679a7622c83ba6d6cefb0ec4e79a744501354aeb923afb1498.scope - libcontainer container 598d834183bb1c679a7622c83ba6d6cefb0ec4e79a744501354aeb923afb1498. Sep 4 17:38:23.956811 containerd[1445]: time="2024-09-04T17:38:23.956742289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-pl7qh,Uid:808b69ae-ec6e-4c7d-b832-c3f19fa3c02a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"598d834183bb1c679a7622c83ba6d6cefb0ec4e79a744501354aeb923afb1498\"" Sep 4 17:38:23.958679 containerd[1445]: time="2024-09-04T17:38:23.958637669Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:38:24.036905 containerd[1445]: time="2024-09-04T17:38:24.035540401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:24.036905 containerd[1445]: time="2024-09-04T17:38:24.036791128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:24.036905 containerd[1445]: time="2024-09-04T17:38:24.036804102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:24.037488 containerd[1445]: time="2024-09-04T17:38:24.036903368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:24.057542 systemd[1]: Started cri-containerd-5ed8e07b4f08377aacf47c3b5a149ad9c2f3ac94e4ac48b54f015c82d50cb4b0.scope - libcontainer container 5ed8e07b4f08377aacf47c3b5a149ad9c2f3ac94e4ac48b54f015c82d50cb4b0. 
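The RunPodSandbox lines are CRI calls from the kubelet into containerd. A sketch of the same call issued directly over containerd's CRI socket, with the sandbox metadata taken from the tigera-operator entry above; the socket path and timeout are assumptions:

    // run_sandbox.go — sketch of the CRI RunPodSandbox call behind the log lines.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed containerd CRI endpoint on a Flatcar host.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	// Metadata mirrors the PodSandboxMetadata printed in the log.
    	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "tigera-operator-5d56685c77-pl7qh",
    				Uid:       "808b69ae-ec6e-4c7d-b832-c3f19fa3c02a",
    				Namespace: "tigera-operator",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. 598d8341... in the log
    }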
Sep 4 17:38:24.081769 containerd[1445]: time="2024-09-04T17:38:24.081728443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kg6jn,Uid:6d76bd37-916a-49d2-936b-40ac9d57f190,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ed8e07b4f08377aacf47c3b5a149ad9c2f3ac94e4ac48b54f015c82d50cb4b0\"" Sep 4 17:38:24.082555 kubelet[2615]: E0904 17:38:24.082521 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:24.084494 containerd[1445]: time="2024-09-04T17:38:24.084449417Z" level=info msg="CreateContainer within sandbox \"5ed8e07b4f08377aacf47c3b5a149ad9c2f3ac94e4ac48b54f015c82d50cb4b0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:38:24.346790 containerd[1445]: time="2024-09-04T17:38:24.346643805Z" level=info msg="CreateContainer within sandbox \"5ed8e07b4f08377aacf47c3b5a149ad9c2f3ac94e4ac48b54f015c82d50cb4b0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63bbb9a105e088851958603da01f5f3f94fd8b9b1712dd3697d57cf6a8c737c8\"" Sep 4 17:38:24.347280 containerd[1445]: time="2024-09-04T17:38:24.347229157Z" level=info msg="StartContainer for \"63bbb9a105e088851958603da01f5f3f94fd8b9b1712dd3697d57cf6a8c737c8\"" Sep 4 17:38:24.384519 systemd[1]: Started cri-containerd-63bbb9a105e088851958603da01f5f3f94fd8b9b1712dd3697d57cf6a8c737c8.scope - libcontainer container 63bbb9a105e088851958603da01f5f3f94fd8b9b1712dd3697d57cf6a8c737c8. Sep 4 17:38:24.418497 containerd[1445]: time="2024-09-04T17:38:24.418433874Z" level=info msg="StartContainer for \"63bbb9a105e088851958603da01f5f3f94fd8b9b1712dd3697d57cf6a8c737c8\" returns successfully" Sep 4 17:38:25.157301 kubelet[2615]: E0904 17:38:25.156441 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:25.191173 kubelet[2615]: I0904 17:38:25.191110 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kg6jn" podStartSLOduration=2.191064395 podStartE2EDuration="2.191064395s" podCreationTimestamp="2024-09-04 17:38:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:38:25.18937257 +0000 UTC m=+16.177441330" watchObservedRunningTime="2024-09-04 17:38:25.191064395 +0000 UTC m=+16.179133155" Sep 4 17:38:25.287275 systemd[1]: run-containerd-runc-k8s.io-63bbb9a105e088851958603da01f5f3f94fd8b9b1712dd3697d57cf6a8c737c8-runc.TGqWa9.mount: Deactivated successfully. Sep 4 17:38:25.807113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629507072.mount: Deactivated successfully. 
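The pod_startup_latency_tracker entry above records two durations: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the time spent pulling images. For kube-proxy the pull timestamps are the zero value (the image was already on disk), so the two durations are equal. A worked sketch with the numbers from the log:

    // startup_latency.go — sketch of the podStartSLOduration arithmetic above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	created := time.Date(2024, 9, 4, 17, 38, 23, 0, time.UTC)         // podCreationTimestamp
    	running := time.Date(2024, 9, 4, 17, 38, 25, 191064395, time.UTC) // observedRunningTime
    	var pullStart, pullEnd time.Time                                  // zero values: image already present

    	e2e := running.Sub(created)
    	slo := e2e - pullEnd.Sub(pullStart) // subtract image-pull time, here 0
    	fmt.Println("podStartE2EDuration:", e2e, "podStartSLOduration:", slo)
    	// Both print 2.191064395s, matching the kube-proxy-kg6jn entry.
    }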
Sep 4 17:38:26.157796 kubelet[2615]: E0904 17:38:26.157680 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:26.210691 containerd[1445]: time="2024-09-04T17:38:26.210619910Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:26.228437 containerd[1445]: time="2024-09-04T17:38:26.228373951Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136545" Sep 4 17:38:26.258835 containerd[1445]: time="2024-09-04T17:38:26.258751785Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:26.275163 containerd[1445]: time="2024-09-04T17:38:26.275076405Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:26.276019 containerd[1445]: time="2024-09-04T17:38:26.275974936Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.31729137s" Sep 4 17:38:26.276079 containerd[1445]: time="2024-09-04T17:38:26.276017948Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:38:26.278234 containerd[1445]: time="2024-09-04T17:38:26.278186430Z" level=info msg="CreateContainer within sandbox \"598d834183bb1c679a7622c83ba6d6cefb0ec4e79a744501354aeb923afb1498\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:38:26.434491 containerd[1445]: time="2024-09-04T17:38:26.434305273Z" level=info msg="CreateContainer within sandbox \"598d834183bb1c679a7622c83ba6d6cefb0ec4e79a744501354aeb923afb1498\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"98111ae8676307be5fa17aa35fe558bf333d094d10e024a728c1f2f71112a4c4\"" Sep 4 17:38:26.435214 containerd[1445]: time="2024-09-04T17:38:26.435188275Z" level=info msg="StartContainer for \"98111ae8676307be5fa17aa35fe558bf333d094d10e024a728c1f2f71112a4c4\"" Sep 4 17:38:26.470497 systemd[1]: Started cri-containerd-98111ae8676307be5fa17aa35fe558bf333d094d10e024a728c1f2f71112a4c4.scope - libcontainer container 98111ae8676307be5fa17aa35fe558bf333d094d10e024a728c1f2f71112a4c4. 
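The PullImage/ImageCreate sequence above is the CRI image service at work: the operator image resolves to the sha256 digest shown and takes about 2.32 s. A sketch of the equivalent pull through the CRI image service, reusing the assumed socket path from the sandbox sketch earlier:

    // pull_image.go — sketch of the CRI PullImage call behind the log lines.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	images := runtimeapi.NewImageServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	start := time.Now()
    	resp, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.34.3"},
    	})
    	if err != nil {
    		panic(err)
    	}
    	// The log reports ~2.317s for this pull and the digest 2cc4de6a...
    	fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
    }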
Sep 4 17:38:26.530679 containerd[1445]: time="2024-09-04T17:38:26.530601985Z" level=info msg="StartContainer for \"98111ae8676307be5fa17aa35fe558bf333d094d10e024a728c1f2f71112a4c4\" returns successfully" Sep 4 17:38:27.170131 kubelet[2615]: I0904 17:38:27.170066 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-pl7qh" podStartSLOduration=2.85169337 podStartE2EDuration="5.170014178s" podCreationTimestamp="2024-09-04 17:38:22 +0000 UTC" firstStartedPulling="2024-09-04 17:38:23.958198493 +0000 UTC m=+14.946267233" lastFinishedPulling="2024-09-04 17:38:26.276519301 +0000 UTC m=+17.264588041" observedRunningTime="2024-09-04 17:38:27.169774598 +0000 UTC m=+18.157843338" watchObservedRunningTime="2024-09-04 17:38:27.170014178 +0000 UTC m=+18.158082928" Sep 4 17:38:29.344459 kubelet[2615]: I0904 17:38:29.344408 2615 topology_manager.go:215] "Topology Admit Handler" podUID="8729c25a-4672-4b51-873d-bcf5a975cb61" podNamespace="calico-system" podName="calico-typha-f45f6b778-p5sgq" Sep 4 17:38:29.365732 systemd[1]: Created slice kubepods-besteffort-pod8729c25a_4672_4b51_873d_bcf5a975cb61.slice - libcontainer container kubepods-besteffort-pod8729c25a_4672_4b51_873d_bcf5a975cb61.slice. Sep 4 17:38:29.399729 kubelet[2615]: I0904 17:38:29.399687 2615 topology_manager.go:215] "Topology Admit Handler" podUID="6740b4f1-3b99-4efa-a8a8-55e94fb92d97" podNamespace="calico-system" podName="calico-node-stthc" Sep 4 17:38:29.408412 systemd[1]: Created slice kubepods-besteffort-pod6740b4f1_3b99_4efa_a8a8_55e94fb92d97.slice - libcontainer container kubepods-besteffort-pod6740b4f1_3b99_4efa_a8a8_55e94fb92d97.slice. Sep 4 17:38:29.502748 kubelet[2615]: I0904 17:38:29.502680 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gj6f\" (UniqueName: \"kubernetes.io/projected/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-kube-api-access-6gj6f\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.502748 kubelet[2615]: I0904 17:38:29.502748 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkmtb\" (UniqueName: \"kubernetes.io/projected/8729c25a-4672-4b51-873d-bcf5a975cb61-kube-api-access-tkmtb\") pod \"calico-typha-f45f6b778-p5sgq\" (UID: \"8729c25a-4672-4b51-873d-bcf5a975cb61\") " pod="calico-system/calico-typha-f45f6b778-p5sgq" Sep 4 17:38:29.504040 kubelet[2615]: I0904 17:38:29.502819 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-lib-modules\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504040 kubelet[2615]: I0904 17:38:29.502886 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-tigera-ca-bundle\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504040 kubelet[2615]: I0904 17:38:29.502950 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-xtables-lock\") pod 
\"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504040 kubelet[2615]: I0904 17:38:29.503066 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-cni-bin-dir\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504040 kubelet[2615]: I0904 17:38:29.503112 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-cni-log-dir\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504243 kubelet[2615]: I0904 17:38:29.503156 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-flexvol-driver-host\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504243 kubelet[2615]: I0904 17:38:29.503223 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-cni-net-dir\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504243 kubelet[2615]: I0904 17:38:29.503253 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8729c25a-4672-4b51-873d-bcf5a975cb61-tigera-ca-bundle\") pod \"calico-typha-f45f6b778-p5sgq\" (UID: \"8729c25a-4672-4b51-873d-bcf5a975cb61\") " pod="calico-system/calico-typha-f45f6b778-p5sgq" Sep 4 17:38:29.504243 kubelet[2615]: I0904 17:38:29.503280 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8729c25a-4672-4b51-873d-bcf5a975cb61-typha-certs\") pod \"calico-typha-f45f6b778-p5sgq\" (UID: \"8729c25a-4672-4b51-873d-bcf5a975cb61\") " pod="calico-system/calico-typha-f45f6b778-p5sgq" Sep 4 17:38:29.504243 kubelet[2615]: I0904 17:38:29.503308 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-var-lib-calico\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504440 kubelet[2615]: I0904 17:38:29.503408 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-policysync\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504440 kubelet[2615]: I0904 17:38:29.503457 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-var-run-calico\") pod \"calico-node-stthc\" (UID: 
\"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.504440 kubelet[2615]: I0904 17:38:29.503490 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6740b4f1-3b99-4efa-a8a8-55e94fb92d97-node-certs\") pod \"calico-node-stthc\" (UID: \"6740b4f1-3b99-4efa-a8a8-55e94fb92d97\") " pod="calico-system/calico-node-stthc" Sep 4 17:38:29.546743 kubelet[2615]: I0904 17:38:29.546679 2615 topology_manager.go:215] "Topology Admit Handler" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" podNamespace="calico-system" podName="csi-node-driver-8vp2t" Sep 4 17:38:29.548500 kubelet[2615]: E0904 17:38:29.546996 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:29.609253 kubelet[2615]: E0904 17:38:29.608372 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.610005 kubelet[2615]: W0904 17:38:29.609326 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.610005 kubelet[2615]: E0904 17:38:29.609908 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.610398 kubelet[2615]: E0904 17:38:29.610384 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.610499 kubelet[2615]: W0904 17:38:29.610485 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.610592 kubelet[2615]: E0904 17:38:29.610580 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.610968 kubelet[2615]: E0904 17:38:29.610944 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.611057 kubelet[2615]: W0904 17:38:29.611043 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.611146 kubelet[2615]: E0904 17:38:29.611135 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.613518 kubelet[2615]: E0904 17:38:29.613466 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.613518 kubelet[2615]: W0904 17:38:29.613480 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.613518 kubelet[2615]: E0904 17:38:29.613501 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.613829 kubelet[2615]: E0904 17:38:29.613756 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.613829 kubelet[2615]: W0904 17:38:29.613770 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.613829 kubelet[2615]: E0904 17:38:29.613785 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.615168 kubelet[2615]: E0904 17:38:29.615015 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.615168 kubelet[2615]: W0904 17:38:29.615025 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.615168 kubelet[2615]: E0904 17:38:29.615043 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.619403 kubelet[2615]: E0904 17:38:29.619318 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.619403 kubelet[2615]: W0904 17:38:29.619382 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.619532 kubelet[2615]: E0904 17:38:29.619412 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.619638 kubelet[2615]: E0904 17:38:29.619626 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.619671 kubelet[2615]: W0904 17:38:29.619637 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.619671 kubelet[2615]: E0904 17:38:29.619668 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.621514 kubelet[2615]: E0904 17:38:29.621474 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.621514 kubelet[2615]: W0904 17:38:29.621506 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.621616 kubelet[2615]: E0904 17:38:29.621532 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.673734 kubelet[2615]: E0904 17:38:29.673646 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:29.675139 containerd[1445]: time="2024-09-04T17:38:29.674635031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f45f6b778-p5sgq,Uid:8729c25a-4672-4b51-873d-bcf5a975cb61,Namespace:calico-system,Attempt:0,}" Sep 4 17:38:29.705378 kubelet[2615]: E0904 17:38:29.705307 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.705378 kubelet[2615]: W0904 17:38:29.705356 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.705378 kubelet[2615]: E0904 17:38:29.705382 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.705581 kubelet[2615]: I0904 17:38:29.705425 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3d622273-42a7-410b-a788-c97fd7c8d977-socket-dir\") pod \"csi-node-driver-8vp2t\" (UID: \"3d622273-42a7-410b-a788-c97fd7c8d977\") " pod="calico-system/csi-node-driver-8vp2t" Sep 4 17:38:29.705703 kubelet[2615]: E0904 17:38:29.705669 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.705703 kubelet[2615]: W0904 17:38:29.705686 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.705785 kubelet[2615]: E0904 17:38:29.705712 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.705785 kubelet[2615]: I0904 17:38:29.705739 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d622273-42a7-410b-a788-c97fd7c8d977-kubelet-dir\") pod \"csi-node-driver-8vp2t\" (UID: \"3d622273-42a7-410b-a788-c97fd7c8d977\") " pod="calico-system/csi-node-driver-8vp2t" Sep 4 17:38:29.705972 kubelet[2615]: E0904 17:38:29.705956 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.705972 kubelet[2615]: W0904 17:38:29.705968 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.706047 kubelet[2615]: E0904 17:38:29.705980 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.706047 kubelet[2615]: I0904 17:38:29.706000 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3d622273-42a7-410b-a788-c97fd7c8d977-varrun\") pod \"csi-node-driver-8vp2t\" (UID: \"3d622273-42a7-410b-a788-c97fd7c8d977\") " pod="calico-system/csi-node-driver-8vp2t" Sep 4 17:38:29.706310 kubelet[2615]: E0904 17:38:29.706297 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.706310 kubelet[2615]: W0904 17:38:29.706307 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.706418 kubelet[2615]: E0904 17:38:29.706324 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.706418 kubelet[2615]: I0904 17:38:29.706367 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3d622273-42a7-410b-a788-c97fd7c8d977-registration-dir\") pod \"csi-node-driver-8vp2t\" (UID: \"3d622273-42a7-410b-a788-c97fd7c8d977\") " pod="calico-system/csi-node-driver-8vp2t" Sep 4 17:38:29.706698 kubelet[2615]: E0904 17:38:29.706663 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.706698 kubelet[2615]: W0904 17:38:29.706695 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.706874 kubelet[2615]: E0904 17:38:29.706725 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.707095 kubelet[2615]: E0904 17:38:29.706957 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.707095 kubelet[2615]: W0904 17:38:29.706974 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.707095 kubelet[2615]: E0904 17:38:29.706991 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.707350 kubelet[2615]: E0904 17:38:29.707305 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.707387 kubelet[2615]: W0904 17:38:29.707331 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.707387 kubelet[2615]: E0904 17:38:29.707380 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.707679 kubelet[2615]: E0904 17:38:29.707663 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.707719 kubelet[2615]: W0904 17:38:29.707677 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.707749 kubelet[2615]: E0904 17:38:29.707717 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.707929 kubelet[2615]: E0904 17:38:29.707901 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.707929 kubelet[2615]: W0904 17:38:29.707914 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.707991 kubelet[2615]: E0904 17:38:29.707961 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.708170 kubelet[2615]: E0904 17:38:29.708139 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.708170 kubelet[2615]: W0904 17:38:29.708151 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.708243 kubelet[2615]: E0904 17:38:29.708184 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.708243 kubelet[2615]: I0904 17:38:29.708210 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lb89\" (UniqueName: \"kubernetes.io/projected/3d622273-42a7-410b-a788-c97fd7c8d977-kube-api-access-2lb89\") pod \"csi-node-driver-8vp2t\" (UID: \"3d622273-42a7-410b-a788-c97fd7c8d977\") " pod="calico-system/csi-node-driver-8vp2t" Sep 4 17:38:29.708449 kubelet[2615]: E0904 17:38:29.708434 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.708449 kubelet[2615]: W0904 17:38:29.708448 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.708540 kubelet[2615]: E0904 17:38:29.708487 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.708762 kubelet[2615]: E0904 17:38:29.708747 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.708762 kubelet[2615]: W0904 17:38:29.708760 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.708816 kubelet[2615]: E0904 17:38:29.708776 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.709161 kubelet[2615]: E0904 17:38:29.709040 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.709161 kubelet[2615]: W0904 17:38:29.709054 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.709161 kubelet[2615]: E0904 17:38:29.709075 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.709566 kubelet[2615]: E0904 17:38:29.709452 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.709566 kubelet[2615]: W0904 17:38:29.709463 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.709566 kubelet[2615]: E0904 17:38:29.709476 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.709759 kubelet[2615]: E0904 17:38:29.709725 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.709837 kubelet[2615]: W0904 17:38:29.709801 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.709837 kubelet[2615]: E0904 17:38:29.709822 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.711854 kubelet[2615]: E0904 17:38:29.711834 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:29.712420 containerd[1445]: time="2024-09-04T17:38:29.712327297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-stthc,Uid:6740b4f1-3b99-4efa-a8a8-55e94fb92d97,Namespace:calico-system,Attempt:0,}" Sep 4 17:38:29.809759 kubelet[2615]: E0904 17:38:29.809717 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.809759 kubelet[2615]: W0904 17:38:29.809740 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.809759 kubelet[2615]: E0904 17:38:29.809761 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.810143 kubelet[2615]: E0904 17:38:29.810101 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.810143 kubelet[2615]: W0904 17:38:29.810126 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.810310 kubelet[2615]: E0904 17:38:29.810164 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.810509 kubelet[2615]: E0904 17:38:29.810480 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.810509 kubelet[2615]: W0904 17:38:29.810496 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.810599 kubelet[2615]: E0904 17:38:29.810519 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.810752 kubelet[2615]: E0904 17:38:29.810735 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.810752 kubelet[2615]: W0904 17:38:29.810749 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.810810 kubelet[2615]: E0904 17:38:29.810765 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.811154 kubelet[2615]: E0904 17:38:29.811121 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.811206 kubelet[2615]: W0904 17:38:29.811152 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.811206 kubelet[2615]: E0904 17:38:29.811187 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.811434 kubelet[2615]: E0904 17:38:29.811416 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.811434 kubelet[2615]: W0904 17:38:29.811430 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.811500 kubelet[2615]: E0904 17:38:29.811453 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.811693 kubelet[2615]: E0904 17:38:29.811669 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.811693 kubelet[2615]: W0904 17:38:29.811684 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.811753 kubelet[2615]: E0904 17:38:29.811718 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.811899 kubelet[2615]: E0904 17:38:29.811884 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.811899 kubelet[2615]: W0904 17:38:29.811894 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.811972 kubelet[2615]: E0904 17:38:29.811939 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.812132 kubelet[2615]: E0904 17:38:29.812114 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.812132 kubelet[2615]: W0904 17:38:29.812128 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.812187 kubelet[2615]: E0904 17:38:29.812159 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.812359 kubelet[2615]: E0904 17:38:29.812330 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.812359 kubelet[2615]: W0904 17:38:29.812356 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.812409 kubelet[2615]: E0904 17:38:29.812381 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.812577 kubelet[2615]: E0904 17:38:29.812559 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.812577 kubelet[2615]: W0904 17:38:29.812572 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.812698 kubelet[2615]: E0904 17:38:29.812606 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.812780 kubelet[2615]: E0904 17:38:29.812765 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.812780 kubelet[2615]: W0904 17:38:29.812776 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.812821 kubelet[2615]: E0904 17:38:29.812792 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.813098 kubelet[2615]: E0904 17:38:29.813082 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.813098 kubelet[2615]: W0904 17:38:29.813094 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.813171 kubelet[2615]: E0904 17:38:29.813113 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.813370 kubelet[2615]: E0904 17:38:29.813327 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.813370 kubelet[2615]: W0904 17:38:29.813367 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.813430 kubelet[2615]: E0904 17:38:29.813387 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.813621 kubelet[2615]: E0904 17:38:29.813603 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.813621 kubelet[2615]: W0904 17:38:29.813616 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.813683 kubelet[2615]: E0904 17:38:29.813639 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.813863 kubelet[2615]: E0904 17:38:29.813849 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.813863 kubelet[2615]: W0904 17:38:29.813859 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.813938 kubelet[2615]: E0904 17:38:29.813891 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.814104 kubelet[2615]: E0904 17:38:29.814089 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.814104 kubelet[2615]: W0904 17:38:29.814100 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.814187 kubelet[2615]: E0904 17:38:29.814139 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.814362 kubelet[2615]: E0904 17:38:29.814325 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.814362 kubelet[2615]: W0904 17:38:29.814359 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.814413 kubelet[2615]: E0904 17:38:29.814389 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.814610 kubelet[2615]: E0904 17:38:29.814594 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.814610 kubelet[2615]: W0904 17:38:29.814607 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.814667 kubelet[2615]: E0904 17:38:29.814626 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.814859 kubelet[2615]: E0904 17:38:29.814845 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.814859 kubelet[2615]: W0904 17:38:29.814855 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.814906 kubelet[2615]: E0904 17:38:29.814870 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.815127 kubelet[2615]: E0904 17:38:29.815110 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.815127 kubelet[2615]: W0904 17:38:29.815123 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.815186 kubelet[2615]: E0904 17:38:29.815144 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.815486 kubelet[2615]: E0904 17:38:29.815468 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.815486 kubelet[2615]: W0904 17:38:29.815484 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.815551 kubelet[2615]: E0904 17:38:29.815506 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.815746 kubelet[2615]: E0904 17:38:29.815728 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.815746 kubelet[2615]: W0904 17:38:29.815741 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.815809 kubelet[2615]: E0904 17:38:29.815771 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:29.816041 kubelet[2615]: E0904 17:38:29.816023 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.816041 kubelet[2615]: W0904 17:38:29.816037 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.816100 kubelet[2615]: E0904 17:38:29.816050 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.851883 kubelet[2615]: E0904 17:38:29.851650 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.851883 kubelet[2615]: W0904 17:38:29.851686 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.851883 kubelet[2615]: E0904 17:38:29.851709 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:29.914561 kubelet[2615]: E0904 17:38:29.913809 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:29.914561 kubelet[2615]: W0904 17:38:29.913841 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:29.914561 kubelet[2615]: E0904 17:38:29.913924 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:30.015311 kubelet[2615]: E0904 17:38:30.015275 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:30.015311 kubelet[2615]: W0904 17:38:30.015299 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:30.015311 kubelet[2615]: E0904 17:38:30.015324 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:38:30.058877 kubelet[2615]: E0904 17:38:30.058832 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:38:30.058877 kubelet[2615]: W0904 17:38:30.058857 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:38:30.058877 kubelet[2615]: E0904 17:38:30.058881 2615 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:38:30.473062 containerd[1445]: time="2024-09-04T17:38:30.472788254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:30.473062 containerd[1445]: time="2024-09-04T17:38:30.472876429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:30.473062 containerd[1445]: time="2024-09-04T17:38:30.472977961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:30.476801 containerd[1445]: time="2024-09-04T17:38:30.473091433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:30.490323 containerd[1445]: time="2024-09-04T17:38:30.489911570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:38:30.490323 containerd[1445]: time="2024-09-04T17:38:30.489962636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:38:30.490323 containerd[1445]: time="2024-09-04T17:38:30.489973256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:30.490323 containerd[1445]: time="2024-09-04T17:38:30.490054478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:38:30.501132 systemd[1]: Started cri-containerd-dae634604e921837976c8142e0a64b579f84b14a1f0d44e72b14928aca3fdd4b.scope - libcontainer container dae634604e921837976c8142e0a64b579f84b14a1f0d44e72b14928aca3fdd4b. Sep 4 17:38:30.504570 systemd[1]: Started cri-containerd-b8a87d5c3559b1f1e0c105812edd13bff297d6a2880d02e6a029e2a29bc92890.scope - libcontainer container b8a87d5c3559b1f1e0c105812edd13bff297d6a2880d02e6a029e2a29bc92890. 
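The long run of driver-call.go/plugins.go errors above comes from the kubelet probing the Calico FlexVolume driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: the binary is not installed yet, the exec fails, stdout comes back empty, and unmarshalling empty output yields "unexpected end of JSON input". A FlexVolume driver is just an executable that answers subcommands such as init with a JSON status object on stdout; a minimal illustrative stub (not Calico's actual uds driver) looks like:

    // uds_stub.go — minimal FlexVolume driver stub for the probe seen above.
    package main

    import (
    	"encoding/json"
    	"os"
    )

    // driverStatus is the JSON shape the kubelet expects on stdout.
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	out := json.NewEncoder(os.Stdout)
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// Report success and declare that no attach/detach support is needed.
    		out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
    		return
    	}
    	// All other calls are unsupported in this stub.
    	out.Encode(driverStatus{Status: "Not supported"})
    	os.Exit(1)
    }

These probe errors stop once the real driver lands on the host: the pod2daemon-flexvol image pulled just below is what installs it into the flexvol-driver-host mount declared in the calico-node spec earlier.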
Sep 4 17:38:30.534801 containerd[1445]: time="2024-09-04T17:38:30.534286722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-stthc,Uid:6740b4f1-3b99-4efa-a8a8-55e94fb92d97,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8a87d5c3559b1f1e0c105812edd13bff297d6a2880d02e6a029e2a29bc92890\"" Sep 4 17:38:30.538782 kubelet[2615]: E0904 17:38:30.538712 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:30.548575 containerd[1445]: time="2024-09-04T17:38:30.548524710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:38:30.554090 containerd[1445]: time="2024-09-04T17:38:30.553960273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f45f6b778-p5sgq,Uid:8729c25a-4672-4b51-873d-bcf5a975cb61,Namespace:calico-system,Attempt:0,} returns sandbox id \"dae634604e921837976c8142e0a64b579f84b14a1f0d44e72b14928aca3fdd4b\"" Sep 4 17:38:30.554997 kubelet[2615]: E0904 17:38:30.554938 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:31.106039 kubelet[2615]: E0904 17:38:31.105901 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:31.998960 containerd[1445]: time="2024-09-04T17:38:31.998846696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:31.999940 containerd[1445]: time="2024-09-04T17:38:31.999888356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:38:32.001069 containerd[1445]: time="2024-09-04T17:38:32.001004995Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:32.003732 containerd[1445]: time="2024-09-04T17:38:32.003690314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:32.004561 containerd[1445]: time="2024-09-04T17:38:32.004521226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.455950719s" Sep 4 17:38:32.004613 containerd[1445]: time="2024-09-04T17:38:32.004562865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:38:32.005205 containerd[1445]: time="2024-09-04T17:38:32.005174263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:38:32.007765 
containerd[1445]: time="2024-09-04T17:38:32.007729098Z" level=info msg="CreateContainer within sandbox \"b8a87d5c3559b1f1e0c105812edd13bff297d6a2880d02e6a029e2a29bc92890\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:38:32.026991 containerd[1445]: time="2024-09-04T17:38:32.026916603Z" level=info msg="CreateContainer within sandbox \"b8a87d5c3559b1f1e0c105812edd13bff297d6a2880d02e6a029e2a29bc92890\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d67ff247c7d8ad53af3f60b91178c5db831b6bc299ebee315fd024939fdf742\"" Sep 4 17:38:32.027751 containerd[1445]: time="2024-09-04T17:38:32.027697852Z" level=info msg="StartContainer for \"5d67ff247c7d8ad53af3f60b91178c5db831b6bc299ebee315fd024939fdf742\"" Sep 4 17:38:32.060502 systemd[1]: Started cri-containerd-5d67ff247c7d8ad53af3f60b91178c5db831b6bc299ebee315fd024939fdf742.scope - libcontainer container 5d67ff247c7d8ad53af3f60b91178c5db831b6bc299ebee315fd024939fdf742. Sep 4 17:38:32.107658 systemd[1]: cri-containerd-5d67ff247c7d8ad53af3f60b91178c5db831b6bc299ebee315fd024939fdf742.scope: Deactivated successfully. Sep 4 17:38:32.114482 containerd[1445]: time="2024-09-04T17:38:32.114406388Z" level=info msg="StartContainer for \"5d67ff247c7d8ad53af3f60b91178c5db831b6bc299ebee315fd024939fdf742\" returns successfully" Sep 4 17:38:32.177510 kubelet[2615]: E0904 17:38:32.177470 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:32.328959 containerd[1445]: time="2024-09-04T17:38:32.326146373Z" level=info msg="shim disconnected" id=5d67ff247c7d8ad53af3f60b91178c5db831b6bc299ebee315fd024939fdf742 namespace=k8s.io Sep 4 17:38:32.328959 containerd[1445]: time="2024-09-04T17:38:32.328868751Z" level=warning msg="cleaning up after shim disconnected" id=5d67ff247c7d8ad53af3f60b91178c5db831b6bc299ebee315fd024939fdf742 namespace=k8s.io Sep 4 17:38:32.328959 containerd[1445]: time="2024-09-04T17:38:32.328882417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:33.023202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d67ff247c7d8ad53af3f60b91178c5db831b6bc299ebee315fd024939fdf742-rootfs.mount: Deactivated successfully. 
Sep 4 17:38:33.104746 kubelet[2615]: E0904 17:38:33.104685 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:33.180646 kubelet[2615]: E0904 17:38:33.180613 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:34.870030 containerd[1445]: time="2024-09-04T17:38:34.869965934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:34.871454 containerd[1445]: time="2024-09-04T17:38:34.871373580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:38:34.872541 containerd[1445]: time="2024-09-04T17:38:34.872502041Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:34.874689 containerd[1445]: time="2024-09-04T17:38:34.874659276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:34.875290 containerd[1445]: time="2024-09-04T17:38:34.875247812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.870041007s" Sep 4 17:38:34.875290 containerd[1445]: time="2024-09-04T17:38:34.875284231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:38:34.876714 containerd[1445]: time="2024-09-04T17:38:34.876164155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:38:34.887554 containerd[1445]: time="2024-09-04T17:38:34.887498469Z" level=info msg="CreateContainer within sandbox \"dae634604e921837976c8142e0a64b579f84b14a1f0d44e72b14928aca3fdd4b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:38:34.902264 containerd[1445]: time="2024-09-04T17:38:34.902198007Z" level=info msg="CreateContainer within sandbox \"dae634604e921837976c8142e0a64b579f84b14a1f0d44e72b14928aca3fdd4b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aa3b17b2e37505bd546efc7b57e9dc125fb42cb1300eff00f2f965d4e28ef9d1\"" Sep 4 17:38:34.902793 containerd[1445]: time="2024-09-04T17:38:34.902750175Z" level=info msg="StartContainer for \"aa3b17b2e37505bd546efc7b57e9dc125fb42cb1300eff00f2f965d4e28ef9d1\"" Sep 4 17:38:34.932574 systemd[1]: Started cri-containerd-aa3b17b2e37505bd546efc7b57e9dc125fb42cb1300eff00f2f965d4e28ef9d1.scope - libcontainer container aa3b17b2e37505bd546efc7b57e9dc125fb42cb1300eff00f2f965d4e28ef9d1. 
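The "Pulled image ... in 2.870041007s" line above is the CRI plugin reporting wall-clock pull-and-unpack time, bracketed by the ImageCreate events for the tag, the config blob, and the repo digest. A rough equivalent as a sketch against the same socket and namespace:

```go
// Sketch: pull and unpack an image with containerd's Go client and
// report a duration comparable to the "Pulled image ... in Ns" log line.
package main

import (
	"context"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.28.1",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s in %s", image.Name(), time.Since(start))
}
```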
Sep 4 17:38:34.980160 containerd[1445]: time="2024-09-04T17:38:34.980117723Z" level=info msg="StartContainer for \"aa3b17b2e37505bd546efc7b57e9dc125fb42cb1300eff00f2f965d4e28ef9d1\" returns successfully" Sep 4 17:38:35.105484 kubelet[2615]: E0904 17:38:35.105424 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:35.186098 kubelet[2615]: E0904 17:38:35.185066 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:35.203136 kubelet[2615]: I0904 17:38:35.203093 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-f45f6b778-p5sgq" podStartSLOduration=1.8828699530000002 podStartE2EDuration="6.203043787s" podCreationTimestamp="2024-09-04 17:38:29 +0000 UTC" firstStartedPulling="2024-09-04 17:38:30.555565242 +0000 UTC m=+21.543633982" lastFinishedPulling="2024-09-04 17:38:34.875739086 +0000 UTC m=+25.863807816" observedRunningTime="2024-09-04 17:38:35.194222079 +0000 UTC m=+26.182290839" watchObservedRunningTime="2024-09-04 17:38:35.203043787 +0000 UTC m=+26.191112527" Sep 4 17:38:36.186569 kubelet[2615]: E0904 17:38:36.186523 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:37.105169 kubelet[2615]: E0904 17:38:37.105109 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:37.190668 kubelet[2615]: E0904 17:38:37.190618 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:39.106157 kubelet[2615]: E0904 17:38:39.106003 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:41.104875 kubelet[2615]: E0904 17:38:41.104814 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:41.473624 containerd[1445]: time="2024-09-04T17:38:41.473450109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:41.482019 containerd[1445]: time="2024-09-04T17:38:41.481912973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:38:41.497431 containerd[1445]: time="2024-09-04T17:38:41.497370232Z" level=info 
msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:41.508810 containerd[1445]: time="2024-09-04T17:38:41.508740015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:41.509711 containerd[1445]: time="2024-09-04T17:38:41.509681974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 6.633482172s" Sep 4 17:38:41.509778 containerd[1445]: time="2024-09-04T17:38:41.509717080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:38:41.511596 containerd[1445]: time="2024-09-04T17:38:41.511550253Z" level=info msg="CreateContainer within sandbox \"b8a87d5c3559b1f1e0c105812edd13bff297d6a2880d02e6a029e2a29bc92890\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:38:41.677402 containerd[1445]: time="2024-09-04T17:38:41.677316519Z" level=info msg="CreateContainer within sandbox \"b8a87d5c3559b1f1e0c105812edd13bff297d6a2880d02e6a029e2a29bc92890\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"df22fb72aa71b44aa22ac2fba7025908d0bfb510084a64e88831ca49417ba9e1\"" Sep 4 17:38:41.677923 containerd[1445]: time="2024-09-04T17:38:41.677894665Z" level=info msg="StartContainer for \"df22fb72aa71b44aa22ac2fba7025908d0bfb510084a64e88831ca49417ba9e1\"" Sep 4 17:38:41.718076 systemd[1]: Started cri-containerd-df22fb72aa71b44aa22ac2fba7025908d0bfb510084a64e88831ca49417ba9e1.scope - libcontainer container df22fb72aa71b44aa22ac2fba7025908d0bfb510084a64e88831ca49417ba9e1. Sep 4 17:38:41.912606 containerd[1445]: time="2024-09-04T17:38:41.912541531Z" level=info msg="StartContainer for \"df22fb72aa71b44aa22ac2fba7025908d0bfb510084a64e88831ca49417ba9e1\" returns successfully" Sep 4 17:38:42.206138 kubelet[2615]: E0904 17:38:42.206001 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:42.472758 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:44064.service - OpenSSH per-connection server daemon (10.0.0.1:44064). Sep 4 17:38:42.543705 sshd[3306]: Accepted publickey for core from 10.0.0.1 port 44064 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:38:42.545945 sshd[3306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:42.553970 systemd-logind[1432]: New session 10 of user core. Sep 4 17:38:42.558495 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:38:42.747385 sshd[3306]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:42.752446 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:38:42.753192 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:44064.service: Deactivated successfully. Sep 4 17:38:42.755597 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:38:42.756625 systemd-logind[1432]: Removed session 10. 
Sep 4 17:38:43.105058 kubelet[2615]: E0904 17:38:43.104921 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:43.207268 kubelet[2615]: E0904 17:38:43.207237 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:44.193206 systemd[1]: cri-containerd-df22fb72aa71b44aa22ac2fba7025908d0bfb510084a64e88831ca49417ba9e1.scope: Deactivated successfully. Sep 4 17:38:44.206529 kubelet[2615]: I0904 17:38:44.206489 2615 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:38:44.217874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df22fb72aa71b44aa22ac2fba7025908d0bfb510084a64e88831ca49417ba9e1-rootfs.mount: Deactivated successfully. Sep 4 17:38:44.234044 kubelet[2615]: I0904 17:38:44.233996 2615 topology_manager.go:215] "Topology Admit Handler" podUID="0b703636-a98c-4502-8b45-5e98626c26a6" podNamespace="kube-system" podName="coredns-76f75df574-v7vtb" Sep 4 17:38:44.234696 kubelet[2615]: I0904 17:38:44.234224 2615 topology_manager.go:215] "Topology Admit Handler" podUID="97fc0f1c-4d49-47c0-a204-0725392f4861" podNamespace="kube-system" podName="coredns-76f75df574-hf6kl" Sep 4 17:38:44.234696 kubelet[2615]: I0904 17:38:44.234481 2615 topology_manager.go:215] "Topology Admit Handler" podUID="2c22d79b-aa3b-471c-9579-6849476d0d1d" podNamespace="calico-system" podName="calico-kube-controllers-77f499bdf-lsbg2" Sep 4 17:38:44.242430 systemd[1]: Created slice kubepods-burstable-pod97fc0f1c_4d49_47c0_a204_0725392f4861.slice - libcontainer container kubepods-burstable-pod97fc0f1c_4d49_47c0_a204_0725392f4861.slice. Sep 4 17:38:44.246404 systemd[1]: Created slice kubepods-burstable-pod0b703636_a98c_4502_8b45_5e98626c26a6.slice - libcontainer container kubepods-burstable-pod0b703636_a98c_4502_8b45_5e98626c26a6.slice. Sep 4 17:38:44.250489 systemd[1]: Created slice kubepods-besteffort-pod2c22d79b_aa3b_471c_9579_6849476d0d1d.slice - libcontainer container kubepods-besteffort-pod2c22d79b_aa3b_471c_9579_6849476d0d1d.slice. 
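The slice names created above follow kubelet's systemd cgroup-driver convention, `kubepods-<qos>-pod<uid>.slice`, with dashes in the pod UID rewritten because systemd reads "-" as a hierarchy separator in slice names. A small sketch of the mapping:

```go
// Sketch of kubelet's pod-slice naming under the systemd cgroup driver:
// UID dashes become underscores so they are not parsed as slice nesting.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "0b703636-a98c-4502-8b45-5e98626c26a6"))
	// kubepods-burstable-pod0b703636_a98c_4502_8b45_5e98626c26a6.slice
}
```

The burstable coredns slices and the besteffort calico-kube-controllers slice above all follow this same rule.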
Sep 4 17:38:44.326818 kubelet[2615]: I0904 17:38:44.326755 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c22d79b-aa3b-471c-9579-6849476d0d1d-tigera-ca-bundle\") pod \"calico-kube-controllers-77f499bdf-lsbg2\" (UID: \"2c22d79b-aa3b-471c-9579-6849476d0d1d\") " pod="calico-system/calico-kube-controllers-77f499bdf-lsbg2" Sep 4 17:38:44.326818 kubelet[2615]: I0904 17:38:44.326803 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b703636-a98c-4502-8b45-5e98626c26a6-config-volume\") pod \"coredns-76f75df574-v7vtb\" (UID: \"0b703636-a98c-4502-8b45-5e98626c26a6\") " pod="kube-system/coredns-76f75df574-v7vtb" Sep 4 17:38:44.326818 kubelet[2615]: I0904 17:38:44.326825 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97fc0f1c-4d49-47c0-a204-0725392f4861-config-volume\") pod \"coredns-76f75df574-hf6kl\" (UID: \"97fc0f1c-4d49-47c0-a204-0725392f4861\") " pod="kube-system/coredns-76f75df574-hf6kl" Sep 4 17:38:44.327098 kubelet[2615]: I0904 17:38:44.326851 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cnqr\" (UniqueName: \"kubernetes.io/projected/97fc0f1c-4d49-47c0-a204-0725392f4861-kube-api-access-9cnqr\") pod \"coredns-76f75df574-hf6kl\" (UID: \"97fc0f1c-4d49-47c0-a204-0725392f4861\") " pod="kube-system/coredns-76f75df574-hf6kl" Sep 4 17:38:44.327098 kubelet[2615]: I0904 17:38:44.327044 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xg5m\" (UniqueName: \"kubernetes.io/projected/2c22d79b-aa3b-471c-9579-6849476d0d1d-kube-api-access-9xg5m\") pod \"calico-kube-controllers-77f499bdf-lsbg2\" (UID: \"2c22d79b-aa3b-471c-9579-6849476d0d1d\") " pod="calico-system/calico-kube-controllers-77f499bdf-lsbg2" Sep 4 17:38:44.327149 kubelet[2615]: I0904 17:38:44.327108 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbzh6\" (UniqueName: \"kubernetes.io/projected/0b703636-a98c-4502-8b45-5e98626c26a6-kube-api-access-xbzh6\") pod \"coredns-76f75df574-v7vtb\" (UID: \"0b703636-a98c-4502-8b45-5e98626c26a6\") " pod="kube-system/coredns-76f75df574-v7vtb" Sep 4 17:38:44.344829 containerd[1445]: time="2024-09-04T17:38:44.344760248Z" level=info msg="shim disconnected" id=df22fb72aa71b44aa22ac2fba7025908d0bfb510084a64e88831ca49417ba9e1 namespace=k8s.io Sep 4 17:38:44.344829 containerd[1445]: time="2024-09-04T17:38:44.344821624Z" level=warning msg="cleaning up after shim disconnected" id=df22fb72aa71b44aa22ac2fba7025908d0bfb510084a64e88831ca49417ba9e1 namespace=k8s.io Sep 4 17:38:44.344829 containerd[1445]: time="2024-09-04T17:38:44.344829739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:38:44.565708 kubelet[2615]: E0904 17:38:44.565647 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:44.566562 kubelet[2615]: E0904 17:38:44.565948 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:44.566710 
containerd[1445]: time="2024-09-04T17:38:44.566230263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hf6kl,Uid:97fc0f1c-4d49-47c0-a204-0725392f4861,Namespace:kube-system,Attempt:0,}" Sep 4 17:38:44.566898 containerd[1445]: time="2024-09-04T17:38:44.566863983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v7vtb,Uid:0b703636-a98c-4502-8b45-5e98626c26a6,Namespace:kube-system,Attempt:0,}" Sep 4 17:38:44.567003 containerd[1445]: time="2024-09-04T17:38:44.566935247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f499bdf-lsbg2,Uid:2c22d79b-aa3b-471c-9579-6849476d0d1d,Namespace:calico-system,Attempt:0,}" Sep 4 17:38:45.111133 systemd[1]: Created slice kubepods-besteffort-pod3d622273_42a7_410b_a788_c97fd7c8d977.slice - libcontainer container kubepods-besteffort-pod3d622273_42a7_410b_a788_c97fd7c8d977.slice. Sep 4 17:38:45.113654 containerd[1445]: time="2024-09-04T17:38:45.113616354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vp2t,Uid:3d622273-42a7-410b-a788-c97fd7c8d977,Namespace:calico-system,Attempt:0,}" Sep 4 17:38:45.212189 kubelet[2615]: E0904 17:38:45.212158 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:45.215351 containerd[1445]: time="2024-09-04T17:38:45.213441800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:38:45.710422 containerd[1445]: time="2024-09-04T17:38:45.710269661Z" level=error msg="Failed to destroy network for sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.710911 containerd[1445]: time="2024-09-04T17:38:45.710685572Z" level=error msg="encountered an error cleaning up failed sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.710911 containerd[1445]: time="2024-09-04T17:38:45.710729334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v7vtb,Uid:0b703636-a98c-4502-8b45-5e98626c26a6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.711055 kubelet[2615]: E0904 17:38:45.711010 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.711532 kubelet[2615]: E0904 17:38:45.711087 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v7vtb" Sep 4 17:38:45.711532 kubelet[2615]: E0904 17:38:45.711113 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v7vtb" Sep 4 17:38:45.711532 kubelet[2615]: E0904 17:38:45.711183 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-v7vtb_kube-system(0b703636-a98c-4502-8b45-5e98626c26a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-v7vtb_kube-system(0b703636-a98c-4502-8b45-5e98626c26a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v7vtb" podUID="0b703636-a98c-4502-8b45-5e98626c26a6" Sep 4 17:38:45.742192 containerd[1445]: time="2024-09-04T17:38:45.742080233Z" level=error msg="Failed to destroy network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.742868 containerd[1445]: time="2024-09-04T17:38:45.742836453Z" level=error msg="encountered an error cleaning up failed sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.742932 containerd[1445]: time="2024-09-04T17:38:45.742892097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hf6kl,Uid:97fc0f1c-4d49-47c0-a204-0725392f4861,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.743178 kubelet[2615]: E0904 17:38:45.743141 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.743357 kubelet[2615]: E0904 17:38:45.743202 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hf6kl" Sep 4 17:38:45.743357 kubelet[2615]: E0904 17:38:45.743224 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hf6kl" Sep 4 17:38:45.743357 kubelet[2615]: E0904 17:38:45.743284 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hf6kl_kube-system(97fc0f1c-4d49-47c0-a204-0725392f4861)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hf6kl_kube-system(97fc0f1c-4d49-47c0-a204-0725392f4861)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hf6kl" podUID="97fc0f1c-4d49-47c0-a204-0725392f4861" Sep 4 17:38:45.776932 containerd[1445]: time="2024-09-04T17:38:45.776854819Z" level=error msg="Failed to destroy network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.777277 containerd[1445]: time="2024-09-04T17:38:45.777245263Z" level=error msg="encountered an error cleaning up failed sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.777315 containerd[1445]: time="2024-09-04T17:38:45.777292912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f499bdf-lsbg2,Uid:2c22d79b-aa3b-471c-9579-6849476d0d1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.777595 kubelet[2615]: E0904 17:38:45.777570 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:45.777653 kubelet[2615]: E0904 17:38:45.777630 2615 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f499bdf-lsbg2" Sep 4 17:38:45.777682 kubelet[2615]: E0904 17:38:45.777654 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f499bdf-lsbg2" Sep 4 17:38:45.777730 kubelet[2615]: E0904 17:38:45.777716 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77f499bdf-lsbg2_calico-system(2c22d79b-aa3b-471c-9579-6849476d0d1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77f499bdf-lsbg2_calico-system(2c22d79b-aa3b-471c-9579-6849476d0d1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77f499bdf-lsbg2" podUID="2c22d79b-aa3b-471c-9579-6849476d0d1d" Sep 4 17:38:46.215062 kubelet[2615]: I0904 17:38:46.215025 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:38:46.215787 kubelet[2615]: I0904 17:38:46.215770 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:38:46.216543 containerd[1445]: time="2024-09-04T17:38:46.216107479Z" level=info msg="StopPodSandbox for \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\"" Sep 4 17:38:46.216543 containerd[1445]: time="2024-09-04T17:38:46.216266307Z" level=info msg="Ensure that sandbox 19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469 in task-service has been cleanup successfully" Sep 4 17:38:46.216543 containerd[1445]: time="2024-09-04T17:38:46.216508723Z" level=info msg="StopPodSandbox for \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\"" Sep 4 17:38:46.216640 kubelet[2615]: I0904 17:38:46.216623 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:38:46.216972 containerd[1445]: time="2024-09-04T17:38:46.216950252Z" level=info msg="StopPodSandbox for \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\"" Sep 4 17:38:46.217127 containerd[1445]: time="2024-09-04T17:38:46.217093300Z" level=info msg="Ensure that sandbox 6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe in task-service has been cleanup successfully" Sep 4 17:38:46.217533 containerd[1445]: time="2024-09-04T17:38:46.217505684Z" level=info msg="Ensure that sandbox 
c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250 in task-service has been cleanup successfully" Sep 4 17:38:46.249367 containerd[1445]: time="2024-09-04T17:38:46.249288631Z" level=error msg="StopPodSandbox for \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\" failed" error="failed to destroy network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:46.250670 kubelet[2615]: E0904 17:38:46.250610 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:38:46.250925 kubelet[2615]: E0904 17:38:46.250821 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250"} Sep 4 17:38:46.250925 kubelet[2615]: E0904 17:38:46.250891 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c22d79b-aa3b-471c-9579-6849476d0d1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:38:46.251254 kubelet[2615]: E0904 17:38:46.251105 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c22d79b-aa3b-471c-9579-6849476d0d1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77f499bdf-lsbg2" podUID="2c22d79b-aa3b-471c-9579-6849476d0d1d" Sep 4 17:38:46.251571 containerd[1445]: time="2024-09-04T17:38:46.251543824Z" level=error msg="StopPodSandbox for \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\" failed" error="failed to destroy network for sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:46.253471 containerd[1445]: time="2024-09-04T17:38:46.253438752Z" level=error msg="StopPodSandbox for \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\" failed" error="failed to destroy network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
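Every sandbox add and delete in this stretch fails on one stat: Calico's CNI plugin reads /var/lib/calico/nodename, a file that calico-node writes once it is running, so until the node agent is up the plugin refuses both RunPodSandbox and StopPodSandbox. That is the whole dependency chain keeping coredns, calico-kube-controllers, and the CSI driver in CreatePodSandboxError here. A sketch of that gate (the path and error text are from the log; the surrounding plugin code is Calico's and not reproduced):

```go
// Sketch of the check behind the logged error: calico-node writes this
// file at startup, and the CNI plugin fails fast while it is absent.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println(err) // what the kubelet log surfaces, wrapped in rpc errors
		return
	}
	fmt.Println("nodename:", name)
}
```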
Sep 4 17:38:46.253568 kubelet[2615]: E0904 17:38:46.253473 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:38:46.253568 kubelet[2615]: E0904 17:38:46.253506 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469"} Sep 4 17:38:46.253626 kubelet[2615]: E0904 17:38:46.253575 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b703636-a98c-4502-8b45-5e98626c26a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:38:46.253626 kubelet[2615]: E0904 17:38:46.253607 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:38:46.253696 kubelet[2615]: E0904 17:38:46.253626 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b703636-a98c-4502-8b45-5e98626c26a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v7vtb" podUID="0b703636-a98c-4502-8b45-5e98626c26a6" Sep 4 17:38:46.253696 kubelet[2615]: E0904 17:38:46.253632 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe"} Sep 4 17:38:46.253696 kubelet[2615]: E0904 17:38:46.253669 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97fc0f1c-4d49-47c0-a204-0725392f4861\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:38:46.253801 kubelet[2615]: E0904 17:38:46.253725 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97fc0f1c-4d49-47c0-a204-0725392f4861\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hf6kl" podUID="97fc0f1c-4d49-47c0-a204-0725392f4861" Sep 4 17:38:46.306076 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250-shm.mount: Deactivated successfully. Sep 4 17:38:46.306182 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe-shm.mount: Deactivated successfully. Sep 4 17:38:46.306259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469-shm.mount: Deactivated successfully. Sep 4 17:38:46.306694 containerd[1445]: time="2024-09-04T17:38:46.306648181Z" level=error msg="Failed to destroy network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:46.307648 containerd[1445]: time="2024-09-04T17:38:46.307618893Z" level=error msg="encountered an error cleaning up failed sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:46.307738 containerd[1445]: time="2024-09-04T17:38:46.307673726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vp2t,Uid:3d622273-42a7-410b-a788-c97fd7c8d977,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:46.308215 kubelet[2615]: E0904 17:38:46.308173 2615 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:46.308858 kubelet[2615]: E0904 17:38:46.308307 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vp2t" Sep 4 17:38:46.308858 kubelet[2615]: E0904 17:38:46.308396 2615 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vp2t" Sep 4 17:38:46.309136 kubelet[2615]: E0904 17:38:46.308451 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8vp2t_calico-system(3d622273-42a7-410b-a788-c97fd7c8d977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8vp2t_calico-system(3d622273-42a7-410b-a788-c97fd7c8d977)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:46.309288 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544-shm.mount: Deactivated successfully. Sep 4 17:38:47.219560 kubelet[2615]: I0904 17:38:47.219521 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:38:47.220150 containerd[1445]: time="2024-09-04T17:38:47.220117594Z" level=info msg="StopPodSandbox for \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\"" Sep 4 17:38:47.220412 containerd[1445]: time="2024-09-04T17:38:47.220386378Z" level=info msg="Ensure that sandbox a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544 in task-service has been cleanup successfully" Sep 4 17:38:47.250539 containerd[1445]: time="2024-09-04T17:38:47.250479478Z" level=error msg="StopPodSandbox for \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\" failed" error="failed to destroy network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:38:47.250784 kubelet[2615]: E0904 17:38:47.250745 2615 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:38:47.250834 kubelet[2615]: E0904 17:38:47.250819 2615 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544"} Sep 4 17:38:47.250868 kubelet[2615]: E0904 17:38:47.250855 2615 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d622273-42a7-410b-a788-c97fd7c8d977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Sep 4 17:38:47.250927 kubelet[2615]: E0904 17:38:47.250891 2615 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d622273-42a7-410b-a788-c97fd7c8d977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8vp2t" podUID="3d622273-42a7-410b-a788-c97fd7c8d977" Sep 4 17:38:47.759004 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:56034.service - OpenSSH per-connection server daemon (10.0.0.1:56034). Sep 4 17:38:47.805309 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 56034 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:38:47.808470 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:47.825643 systemd-logind[1432]: New session 11 of user core. Sep 4 17:38:47.841945 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:38:48.131294 sshd[3599]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:48.135767 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:56034.service: Deactivated successfully. Sep 4 17:38:48.137756 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:38:48.138538 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:38:48.139456 systemd-logind[1432]: Removed session 11. Sep 4 17:38:51.593778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165986802.mount: Deactivated successfully. 
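The unit name var-lib-containerd-tmpmounts-containerd\x2dmount1165986802.mount above shows systemd's path escaping: "/" separators become "-", so a literal "-" inside a path component must be hex-escaped as \x2d. A simplified sketch of that encoding (systemd-escape also escapes dots and other special characters; this covers only what appears here):

```go
// Sketch approximating `systemd-escape --path --suffix=mount`: escape
// literal dashes first, then turn path separators into dashes.
package main

import (
	"fmt"
	"strings"
)

func mountUnitName(path string) string {
	p := strings.Trim(path, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`)
	p = strings.ReplaceAll(p, "/", "-")
	return p + ".mount"
}

func main() {
	fmt.Println(mountUnitName("/var/lib/containerd/tmpmounts/containerd-mount1165986802"))
	// var-lib-containerd-tmpmounts-containerd\x2dmount1165986802.mount
}
```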
Sep 4 17:38:52.359431 containerd[1445]: time="2024-09-04T17:38:52.359274671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:52.385225 containerd[1445]: time="2024-09-04T17:38:52.385115545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:38:52.401962 containerd[1445]: time="2024-09-04T17:38:52.401891057Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:52.433497 containerd[1445]: time="2024-09-04T17:38:52.433438582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:38:52.434007 containerd[1445]: time="2024-09-04T17:38:52.433962195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 7.220470382s" Sep 4 17:38:52.434007 containerd[1445]: time="2024-09-04T17:38:52.433991751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:38:52.443889 containerd[1445]: time="2024-09-04T17:38:52.443561160Z" level=info msg="CreateContainer within sandbox \"b8a87d5c3559b1f1e0c105812edd13bff297d6a2880d02e6a029e2a29bc92890\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:38:52.708055 containerd[1445]: time="2024-09-04T17:38:52.707913504Z" level=info msg="CreateContainer within sandbox \"b8a87d5c3559b1f1e0c105812edd13bff297d6a2880d02e6a029e2a29bc92890\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"05242514a4619d589b365926b56de3fc6ecfec0cf50ee0c874b4af92c18e6968\"" Sep 4 17:38:52.708600 containerd[1445]: time="2024-09-04T17:38:52.708555177Z" level=info msg="StartContainer for \"05242514a4619d589b365926b56de3fc6ecfec0cf50ee0c874b4af92c18e6968\"" Sep 4 17:38:52.780840 systemd[1]: Started cri-containerd-05242514a4619d589b365926b56de3fc6ecfec0cf50ee0c874b4af92c18e6968.scope - libcontainer container 05242514a4619d589b365926b56de3fc6ecfec0cf50ee0c874b4af92c18e6968. Sep 4 17:38:52.872741 containerd[1445]: time="2024-09-04T17:38:52.872701227Z" level=info msg="StartContainer for \"05242514a4619d589b365926b56de3fc6ecfec0cf50ee0c874b4af92c18e6968\" returns successfully" Sep 4 17:38:52.888875 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:38:52.889040 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 4 17:38:53.144850 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:56046.service - OpenSSH per-connection server daemon (10.0.0.1:56046). Sep 4 17:38:53.187297 sshd[3676]: Accepted publickey for core from 10.0.0.1 port 56046 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:38:53.189199 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:53.193725 systemd-logind[1432]: New session 12 of user core. 
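The WireGuard banner surfaces at the moment calico-node starts, most likely because Calico probes the kernel for WireGuard support when deciding whether node-to-node encryption is available (an inference from the timing, not something the log states). One way to confirm the module from userspace, as a sketch:

```go
// Sketch: check /proc/modules for a loaded kernel module by name.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func moduleLoaded(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), name+" ") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := moduleLoaded("wireguard")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("wireguard loaded:", ok)
}
```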
Sep 4 17:38:53.201483 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:38:53.239737 kubelet[2615]: E0904 17:38:53.239701 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:53.370697 sshd[3676]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:53.374650 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:56046.service: Deactivated successfully. Sep 4 17:38:53.378941 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:38:53.380658 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:38:53.381838 systemd-logind[1432]: Removed session 12. Sep 4 17:38:54.633378 kernel: bpftool[3830]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:38:54.897280 systemd-networkd[1386]: vxlan.calico: Link UP Sep 4 17:38:54.897288 systemd-networkd[1386]: vxlan.calico: Gained carrier Sep 4 17:38:55.990158 kubelet[2615]: I0904 17:38:55.990100 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:38:55.990809 kubelet[2615]: E0904 17:38:55.990785 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:56.025947 systemd[1]: run-containerd-runc-k8s.io-05242514a4619d589b365926b56de3fc6ecfec0cf50ee0c874b4af92c18e6968-runc.107WMT.mount: Deactivated successfully. Sep 4 17:38:56.355483 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Sep 4 17:38:58.105520 containerd[1445]: time="2024-09-04T17:38:58.105458563Z" level=info msg="StopPodSandbox for \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\"" Sep 4 17:38:58.105520 containerd[1445]: time="2024-09-04T17:38:58.105503918Z" level=info msg="StopPodSandbox for \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\"" Sep 4 17:38:58.381643 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:56042.service - OpenSSH per-connection server daemon (10.0.0.1:56042). Sep 4 17:38:58.688246 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 56042 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:38:58.690159 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:38:58.695738 systemd-logind[1432]: New session 13 of user core. Sep 4 17:38:58.702614 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:38:58.783325 kubelet[2615]: I0904 17:38:58.783283 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-stthc" podStartSLOduration=7.894690214 podStartE2EDuration="29.781688573s" podCreationTimestamp="2024-09-04 17:38:29 +0000 UTC" firstStartedPulling="2024-09-04 17:38:30.547281823 +0000 UTC m=+21.535350563" lastFinishedPulling="2024-09-04 17:38:52.434280192 +0000 UTC m=+43.422348922" observedRunningTime="2024-09-04 17:38:53.339519748 +0000 UTC m=+44.327588488" watchObservedRunningTime="2024-09-04 17:38:58.781688573 +0000 UTC m=+49.769757313" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.782 [INFO][3977] k8s.go 608: Cleaning up netns ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.784 [INFO][3977] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" iface="eth0" netns="/var/run/netns/cni-cbfb220a-0d6c-649c-7df3-2bb23a6f966a" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.784 [INFO][3977] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" iface="eth0" netns="/var/run/netns/cni-cbfb220a-0d6c-649c-7df3-2bb23a6f966a" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.785 [INFO][3977] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" iface="eth0" netns="/var/run/netns/cni-cbfb220a-0d6c-649c-7df3-2bb23a6f966a" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.785 [INFO][3977] k8s.go 615: Releasing IP address(es) ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.785 [INFO][3977] utils.go 188: Calico CNI releasing IP address ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.848 [INFO][4011] ipam_plugin.go 417: Releasing address using handleID ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" HandleID="k8s-pod-network.19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.848 [INFO][4011] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.848 [INFO][4011] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.859 [WARNING][4011] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" HandleID="k8s-pod-network.19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.859 [INFO][4011] ipam_plugin.go 445: Releasing address using workloadID ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" HandleID="k8s-pod-network.19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.860 [INFO][4011] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:58.873582 containerd[1445]: 2024-09-04 17:38:58.863 [INFO][3977] k8s.go 621: Teardown processing complete. 
ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:38:58.874259 containerd[1445]: time="2024-09-04T17:38:58.873642802Z" level=info msg="TearDown network for sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\" successfully" Sep 4 17:38:58.874259 containerd[1445]: time="2024-09-04T17:38:58.873686434Z" level=info msg="StopPodSandbox for \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\" returns successfully" Sep 4 17:38:58.874321 kubelet[2615]: E0904 17:38:58.874200 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:38:58.874904 containerd[1445]: time="2024-09-04T17:38:58.874863082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v7vtb,Uid:0b703636-a98c-4502-8b45-5e98626c26a6,Namespace:kube-system,Attempt:1,}" Sep 4 17:38:58.879028 systemd[1]: run-netns-cni\x2dcbfb220a\x2d0d6c\x2d649c\x2d7df3\x2d2bb23a6f966a.mount: Deactivated successfully. Sep 4 17:38:58.884589 sshd[3983]: pam_unix(sshd:session): session closed for user core Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.781 [INFO][3981] k8s.go 608: Cleaning up netns ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.783 [INFO][3981] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" iface="eth0" netns="/var/run/netns/cni-07ab6673-2dc5-785c-a2c1-9ac87d44743a" Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.785 [INFO][3981] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" iface="eth0" netns="/var/run/netns/cni-07ab6673-2dc5-785c-a2c1-9ac87d44743a" Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.786 [INFO][3981] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" iface="eth0" netns="/var/run/netns/cni-07ab6673-2dc5-785c-a2c1-9ac87d44743a" Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.786 [INFO][3981] k8s.go 615: Releasing IP address(es) ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.786 [INFO][3981] utils.go 188: Calico CNI releasing IP address ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.849 [INFO][4010] ipam_plugin.go 417: Releasing address using handleID ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" HandleID="k8s-pod-network.a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.850 [INFO][4010] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.862 [INFO][4010] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.869 [WARNING][4010] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" HandleID="k8s-pod-network.a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.869 [INFO][4010] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" HandleID="k8s-pod-network.a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.875 [INFO][4010] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:38:58.887489 containerd[1445]: 2024-09-04 17:38:58.881 [INFO][3981] k8s.go 621: Teardown processing complete. ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:38:58.887489 containerd[1445]: time="2024-09-04T17:38:58.887389365Z" level=info msg="TearDown network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\" successfully" Sep 4 17:38:58.887489 containerd[1445]: time="2024-09-04T17:38:58.887432015Z" level=info msg="StopPodSandbox for \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\" returns successfully" Sep 4 17:38:58.886946 systemd[1]: run-netns-cni\x2d07ab6673\x2d2dc5\x2d785c\x2da2c1\x2d9ac87d44743a.mount: Deactivated successfully. Sep 4 17:38:58.888124 containerd[1445]: time="2024-09-04T17:38:58.888090219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vp2t,Uid:3d622273-42a7-410b-a788-c97fd7c8d977,Namespace:calico-system,Attempt:1,}" Sep 4 17:38:58.891894 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:56042.service: Deactivated successfully. Sep 4 17:38:58.894305 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:38:58.894925 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:38:58.895777 systemd-logind[1432]: Removed session 13. Sep 4 17:39:00.105458 containerd[1445]: time="2024-09-04T17:39:00.105083425Z" level=info msg="StopPodSandbox for \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\"" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.153 [INFO][4044] k8s.go 608: Cleaning up netns ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.153 [INFO][4044] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" iface="eth0" netns="/var/run/netns/cni-e1098cfc-cf0f-6fd8-7707-0e68c7243609" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.154 [INFO][4044] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" iface="eth0" netns="/var/run/netns/cni-e1098cfc-cf0f-6fd8-7707-0e68c7243609" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.154 [INFO][4044] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" iface="eth0" netns="/var/run/netns/cni-e1098cfc-cf0f-6fd8-7707-0e68c7243609" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.154 [INFO][4044] k8s.go 615: Releasing IP address(es) ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.154 [INFO][4044] utils.go 188: Calico CNI releasing IP address ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.176 [INFO][4052] ipam_plugin.go 417: Releasing address using handleID ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" HandleID="k8s-pod-network.6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.176 [INFO][4052] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.176 [INFO][4052] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.259 [WARNING][4052] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" HandleID="k8s-pod-network.6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.259 [INFO][4052] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" HandleID="k8s-pod-network.6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.261 [INFO][4052] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:00.266700 containerd[1445]: 2024-09-04 17:39:00.264 [INFO][4044] k8s.go 621: Teardown processing complete. ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:00.268980 containerd[1445]: time="2024-09-04T17:39:00.266926591Z" level=info msg="TearDown network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\" successfully" Sep 4 17:39:00.268980 containerd[1445]: time="2024-09-04T17:39:00.266952722Z" level=info msg="StopPodSandbox for \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\" returns successfully" Sep 4 17:39:00.268980 containerd[1445]: time="2024-09-04T17:39:00.267647665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hf6kl,Uid:97fc0f1c-4d49-47c0-a204-0725392f4861,Namespace:kube-system,Attempt:1,}" Sep 4 17:39:00.269054 kubelet[2615]: E0904 17:39:00.267294 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:00.269767 systemd[1]: run-netns-cni\x2de1098cfc\x2dcf0f\x2d6fd8\x2d7707\x2d0e68c7243609.mount: Deactivated successfully. 
Sep 4 17:39:00.530743 systemd-networkd[1386]: cali3a25d341758: Link UP Sep 4 17:39:00.534416 systemd-networkd[1386]: cali3a25d341758: Gained carrier Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.402 [INFO][4067] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8vp2t-eth0 csi-node-driver- calico-system 3d622273-42a7-410b-a788-c97fd7c8d977 837 0 2024-09-04 17:38:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-8vp2t eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali3a25d341758 [] []}} ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Namespace="calico-system" Pod="csi-node-driver-8vp2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vp2t-" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.402 [INFO][4067] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Namespace="calico-system" Pod="csi-node-driver-8vp2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.441 [INFO][4081] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" HandleID="k8s-pod-network.dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.447 [INFO][4081] ipam_plugin.go 270: Auto assigning IP ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" HandleID="k8s-pod-network.dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004fb7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8vp2t", "timestamp":"2024-09-04 17:39:00.441031571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.447 [INFO][4081] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.448 [INFO][4081] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
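[Editor's note] The plugin.go 326 / k8s.go 77 records beginning above come from Calico's CNI binary, which the runtime invokes through the standard CNI exec protocol: network config on stdin, CNI_CONTAINERID/CNI_NETNS/CNI_IFNAME in the environment, a JSON result on stdout. A bare-bones skeleton of such a plugin using the upstream github.com/containernetworking/cni v1.x helpers; the hard-coded 192.168.88.129/26 result stands in for the real IPAM call, and everything here is a sketch, not Calico's code:

package main

import (
	"net"

	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	current "github.com/containernetworking/cni/pkg/types/100"
	"github.com/containernetworking/cni/pkg/version"
)

func cmdAdd(args *skel.CmdArgs) error {
	// args carries CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME and the netconf from stdin,
	// matching the ContainerID=... iface="eth0" netns="/var/run/netns/..." fields in the log.
	result := &current.Result{
		CNIVersion: "1.0.0",
		IPs: []*current.IPConfig{{
			Address: net.IPNet{IP: net.ParseIP("192.168.88.129").To4(), Mask: net.CIDRMask(26, 32)},
		}},
	}
	return types.PrintResult(result, result.CNIVersion)
}

func cmdDel(args *skel.CmdArgs) error { return nil } // DEL must be idempotent

func cmdCheck(args *skel.CmdArgs) error { return nil }

func main() {
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "toy CNI plugin sketch")
}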
Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.448 [INFO][4081] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.449 [INFO][4081] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" host="localhost" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.455 [INFO][4081] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.460 [INFO][4081] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.463 [INFO][4081] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.464 [INFO][4081] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.464 [INFO][4081] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" host="localhost" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.466 [INFO][4081] ipam.go 1685: Creating new handle: k8s-pod-network.dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2 Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.498 [INFO][4081] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" host="localhost" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.510 [INFO][4081] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" host="localhost" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.510 [INFO][4081] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" host="localhost" Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.510 [INFO][4081] ipam_plugin.go 379: Released host-wide IPAM lock. 
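[Editor's note] The ipam.go sequence just above — look up the host's block affinity, load 192.168.88.128/26, then "Attempting to assign 1 addresses from block" — reduces to claiming a free ordinal among the 64 addresses of a /26. A stripped-down model of that step (in-memory only; the real allocator persists the block to the datastore, which is the "Writing block in order to claim IPs" record):

package main

import (
	"fmt"
	"net"
)

// block is a toy /26 allocation block: 64 ordinals, one flag each.
type block struct {
	cidr  net.IPNet
	inUse [64]bool
}

// assign claims the first free ordinal and returns base+ordinal,
// e.g. 192.168.88.128/26 ordinal 1 -> 192.168.88.129.
func (b *block) assign() (net.IP, error) {
	for ord, used := range b.inUse {
		if used {
			continue
		}
		b.inUse[ord] = true
		ip := make(net.IP, 4)
		copy(ip, b.cidr.IP.To4())
		ip[3] += byte(ord)
		return ip, nil
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: *cidr}
	b.inUse[0] = true // ordinal 0 is the base address, reserved here for simplicity
	for i := 0; i < 3; i++ {
		ip, _ := b.assign()
		fmt.Println(ip) // 192.168.88.129, .130, .131 — the addresses claimed in this log
	}
}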
Sep 4 17:39:00.557032 containerd[1445]: 2024-09-04 17:39:00.510 [INFO][4081] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" HandleID="k8s-pod-network.dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:00.557602 containerd[1445]: 2024-09-04 17:39:00.516 [INFO][4067] k8s.go 386: Populated endpoint ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Namespace="calico-system" Pod="csi-node-driver-8vp2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vp2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8vp2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d622273-42a7-410b-a788-c97fd7c8d977", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8vp2t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3a25d341758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:00.557602 containerd[1445]: 2024-09-04 17:39:00.517 [INFO][4067] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Namespace="calico-system" Pod="csi-node-driver-8vp2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:00.557602 containerd[1445]: 2024-09-04 17:39:00.517 [INFO][4067] dataplane_linux.go 68: Setting the host side veth name to cali3a25d341758 ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Namespace="calico-system" Pod="csi-node-driver-8vp2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:00.557602 containerd[1445]: 2024-09-04 17:39:00.532 [INFO][4067] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Namespace="calico-system" Pod="csi-node-driver-8vp2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:00.557602 containerd[1445]: 2024-09-04 17:39:00.534 [INFO][4067] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Namespace="calico-system" Pod="csi-node-driver-8vp2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vp2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8vp2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d622273-42a7-410b-a788-c97fd7c8d977", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2", Pod:"csi-node-driver-8vp2t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3a25d341758", MAC:"f2:b1:bd:1e:8e:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:00.557602 containerd[1445]: 2024-09-04 17:39:00.551 [INFO][4067] k8s.go 500: Wrote updated endpoint to datastore ContainerID="dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2" Namespace="calico-system" Pod="csi-node-driver-8vp2t" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:00.696411 containerd[1445]: time="2024-09-04T17:39:00.695031110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:39:00.696411 containerd[1445]: time="2024-09-04T17:39:00.695158208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:39:00.696411 containerd[1445]: time="2024-09-04T17:39:00.695172315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:00.697808 containerd[1445]: time="2024-09-04T17:39:00.697749160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:00.714238 systemd[1]: Started cri-containerd-dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2.scope - libcontainer container dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2. 
Sep 4 17:39:00.719201 systemd-networkd[1386]: calib7d88029c38: Link UP Sep 4 17:39:00.720258 systemd-networkd[1386]: calib7d88029c38: Gained carrier Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.464 [INFO][4087] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--v7vtb-eth0 coredns-76f75df574- kube-system 0b703636-a98c-4502-8b45-5e98626c26a6 836 0 2024-09-04 17:38:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-v7vtb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib7d88029c38 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Namespace="kube-system" Pod="coredns-76f75df574-v7vtb" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v7vtb-" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.464 [INFO][4087] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Namespace="kube-system" Pod="coredns-76f75df574-v7vtb" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.495 [INFO][4103] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" HandleID="k8s-pod-network.f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.523 [INFO][4103] ipam_plugin.go 270: Auto assigning IP ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" HandleID="k8s-pod-network.f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366e80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-v7vtb", "timestamp":"2024-09-04 17:39:00.495168471 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.523 [INFO][4103] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.523 [INFO][4103] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.523 [INFO][4103] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.528 [INFO][4103] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" host="localhost" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.540 [INFO][4103] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.673 [INFO][4103] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.680 [INFO][4103] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.682 [INFO][4103] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.683 [INFO][4103] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" host="localhost" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.684 [INFO][4103] ipam.go 1685: Creating new handle: k8s-pod-network.f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386 Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.690 [INFO][4103] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" host="localhost" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.711 [INFO][4103] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" host="localhost" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.712 [INFO][4103] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" host="localhost" Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.712 [INFO][4103] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:39:00.733587 containerd[1445]: 2024-09-04 17:39:00.712 [INFO][4103] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" HandleID="k8s-pod-network.f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:00.734167 containerd[1445]: 2024-09-04 17:39:00.715 [INFO][4087] k8s.go 386: Populated endpoint ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Namespace="kube-system" Pod="coredns-76f75df574-v7vtb" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v7vtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--v7vtb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0b703636-a98c-4502-8b45-5e98626c26a6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-v7vtb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7d88029c38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:00.734167 containerd[1445]: 2024-09-04 17:39:00.716 [INFO][4087] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Namespace="kube-system" Pod="coredns-76f75df574-v7vtb" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:00.734167 containerd[1445]: 2024-09-04 17:39:00.716 [INFO][4087] dataplane_linux.go 68: Setting the host side veth name to calib7d88029c38 ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Namespace="kube-system" Pod="coredns-76f75df574-v7vtb" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:00.734167 containerd[1445]: 2024-09-04 17:39:00.719 [INFO][4087] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Namespace="kube-system" Pod="coredns-76f75df574-v7vtb" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:00.734167 containerd[1445]: 2024-09-04 17:39:00.719 [INFO][4087] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Namespace="kube-system" Pod="coredns-76f75df574-v7vtb" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v7vtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--v7vtb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0b703636-a98c-4502-8b45-5e98626c26a6", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386", Pod:"coredns-76f75df574-v7vtb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7d88029c38", MAC:"32:af:bd:27:33:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:00.734167 containerd[1445]: 2024-09-04 17:39:00.730 [INFO][4087] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386" Namespace="kube-system" Pod="coredns-76f75df574-v7vtb" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:00.748034 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:39:00.773193 containerd[1445]: time="2024-09-04T17:39:00.773127119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vp2t,Uid:3d622273-42a7-410b-a788-c97fd7c8d977,Namespace:calico-system,Attempt:1,} returns sandbox id \"dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2\"" Sep 4 17:39:00.774950 containerd[1445]: time="2024-09-04T17:39:00.774903235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:39:00.786531 containerd[1445]: time="2024-09-04T17:39:00.786053095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:39:00.786531 containerd[1445]: time="2024-09-04T17:39:00.786121879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:39:00.786531 containerd[1445]: time="2024-09-04T17:39:00.786140334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:00.786531 containerd[1445]: time="2024-09-04T17:39:00.786227064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:00.805469 systemd[1]: Started cri-containerd-f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386.scope - libcontainer container f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386. Sep 4 17:39:00.817050 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:39:00.840444 containerd[1445]: time="2024-09-04T17:39:00.840388772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v7vtb,Uid:0b703636-a98c-4502-8b45-5e98626c26a6,Namespace:kube-system,Attempt:1,} returns sandbox id \"f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386\"" Sep 4 17:39:00.841047 kubelet[2615]: E0904 17:39:00.841024 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:00.843010 containerd[1445]: time="2024-09-04T17:39:00.842974765Z" level=info msg="CreateContainer within sandbox \"f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:39:00.867060 systemd-networkd[1386]: cali970ce37627a: Link UP Sep 4 17:39:00.867291 systemd-networkd[1386]: cali970ce37627a: Gained carrier Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.682 [INFO][4127] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--hf6kl-eth0 coredns-76f75df574- kube-system 97fc0f1c-4d49-47c0-a204-0725392f4861 844 0 2024-09-04 17:38:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-hf6kl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali970ce37627a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Namespace="kube-system" Pod="coredns-76f75df574-hf6kl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hf6kl-" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.682 [INFO][4127] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Namespace="kube-system" Pod="coredns-76f75df574-hf6kl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.725 [INFO][4151] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" HandleID="k8s-pod-network.c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.746 [INFO][4151] ipam_plugin.go 270: Auto assigning IP ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" HandleID="k8s-pod-network.c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000321770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-hf6kl", "timestamp":"2024-09-04 17:39:00.725361919 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.746 [INFO][4151] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.746 [INFO][4151] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.746 [INFO][4151] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.748 [INFO][4151] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" host="localhost" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.753 [INFO][4151] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.760 [INFO][4151] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.764 [INFO][4151] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.769 [INFO][4151] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.769 [INFO][4151] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" host="localhost" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.773 [INFO][4151] ipam.go 1685: Creating new handle: k8s-pod-network.c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.851 [INFO][4151] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" host="localhost" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.861 [INFO][4151] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" host="localhost" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.861 [INFO][4151] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" host="localhost" Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.861 [INFO][4151] ipam_plugin.go 379: Released host-wide IPAM lock. 
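[Editor's note] The kubelet dns.go:153 error that recurs throughout this log ("the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") reflects the glibc resolver's three-nameserver cap (MAXNS): kubelet keeps the first three resolv.conf entries and logs that the rest were omitted. A rough sketch of that trimming; this is a toy parser, not kubelet's actual resolv.conf handling:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirror kubelet: warn and apply only the first three.
		kept := servers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded; applied nameserver line: %s\n", strings.Join(kept, " "))
		return
	}
	fmt.Printf("nameservers: %s\n", strings.Join(servers, " "))
}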
Sep 4 17:39:00.906295 containerd[1445]: 2024-09-04 17:39:00.861 [INFO][4151] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" HandleID="k8s-pod-network.c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:00.906987 containerd[1445]: 2024-09-04 17:39:00.864 [INFO][4127] k8s.go 386: Populated endpoint ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Namespace="kube-system" Pod="coredns-76f75df574-hf6kl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hf6kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hf6kl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"97fc0f1c-4d49-47c0-a204-0725392f4861", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-hf6kl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali970ce37627a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:00.906987 containerd[1445]: 2024-09-04 17:39:00.865 [INFO][4127] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Namespace="kube-system" Pod="coredns-76f75df574-hf6kl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:00.906987 containerd[1445]: 2024-09-04 17:39:00.865 [INFO][4127] dataplane_linux.go 68: Setting the host side veth name to cali970ce37627a ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Namespace="kube-system" Pod="coredns-76f75df574-hf6kl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:00.906987 containerd[1445]: 2024-09-04 17:39:00.867 [INFO][4127] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Namespace="kube-system" Pod="coredns-76f75df574-hf6kl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:00.906987 containerd[1445]: 2024-09-04 17:39:00.867 [INFO][4127] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Namespace="kube-system" Pod="coredns-76f75df574-hf6kl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hf6kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hf6kl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"97fc0f1c-4d49-47c0-a204-0725392f4861", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa", Pod:"coredns-76f75df574-hf6kl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali970ce37627a", MAC:"da:1f:b4:21:4d:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:00.906987 containerd[1445]: 2024-09-04 17:39:00.902 [INFO][4127] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa" Namespace="kube-system" Pod="coredns-76f75df574-hf6kl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:01.035712 containerd[1445]: time="2024-09-04T17:39:01.035599889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:39:01.035712 containerd[1445]: time="2024-09-04T17:39:01.035664415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:39:01.035712 containerd[1445]: time="2024-09-04T17:39:01.035678132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:01.035925 containerd[1445]: time="2024-09-04T17:39:01.035760261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:01.060714 systemd[1]: Started cri-containerd-c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa.scope - libcontainer container c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa. 
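[Editor's note] A small decoding aid for the WorkloadEndpointPort dumps above: Go prints these numeric fields in hex, so Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153 (0x2000 + 0x300 + 0xc0 + 0x1 = 8192 + 768 + 192 + 1), coredns's Prometheus metrics port — exactly the decimal {dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153} list printed earlier in the same records. The Protocol{Type:1, NumVal:0x0, StrVal:"TCP"} shape is a number-or-string union carrying the string form here.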
Sep 4 17:39:01.076914 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:39:01.101477 containerd[1445]: time="2024-09-04T17:39:01.101439024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hf6kl,Uid:97fc0f1c-4d49-47c0-a204-0725392f4861,Namespace:kube-system,Attempt:1,} returns sandbox id \"c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa\"" Sep 4 17:39:01.102080 kubelet[2615]: E0904 17:39:01.102061 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:01.103747 containerd[1445]: time="2024-09-04T17:39:01.103713376Z" level=info msg="CreateContainer within sandbox \"c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:39:01.105257 containerd[1445]: time="2024-09-04T17:39:01.105234744Z" level=info msg="StopPodSandbox for \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\"" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.169 [INFO][4321] k8s.go 608: Cleaning up netns ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.169 [INFO][4321] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" iface="eth0" netns="/var/run/netns/cni-642fa2cd-a04c-4469-e5bd-5a9b8be4f889" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.170 [INFO][4321] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" iface="eth0" netns="/var/run/netns/cni-642fa2cd-a04c-4469-e5bd-5a9b8be4f889" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.170 [INFO][4321] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" iface="eth0" netns="/var/run/netns/cni-642fa2cd-a04c-4469-e5bd-5a9b8be4f889" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.170 [INFO][4321] k8s.go 615: Releasing IP address(es) ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.170 [INFO][4321] utils.go 188: Calico CNI releasing IP address ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.191 [INFO][4329] ipam_plugin.go 417: Releasing address using handleID ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" HandleID="k8s-pod-network.c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.192 [INFO][4329] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.192 [INFO][4329] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.197 [WARNING][4329] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" HandleID="k8s-pod-network.c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.197 [INFO][4329] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" HandleID="k8s-pod-network.c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.198 [INFO][4329] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:01.202947 containerd[1445]: 2024-09-04 17:39:01.200 [INFO][4321] k8s.go 621: Teardown processing complete. ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:01.203742 containerd[1445]: time="2024-09-04T17:39:01.203179981Z" level=info msg="TearDown network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\" successfully" Sep 4 17:39:01.203742 containerd[1445]: time="2024-09-04T17:39:01.203243935Z" level=info msg="StopPodSandbox for \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\" returns successfully" Sep 4 17:39:01.204228 containerd[1445]: time="2024-09-04T17:39:01.204206607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f499bdf-lsbg2,Uid:2c22d79b-aa3b-471c-9579-6849476d0d1d,Namespace:calico-system,Attempt:1,}" Sep 4 17:39:01.255828 systemd[1]: run-netns-cni\x2d642fa2cd\x2da04c\x2d4469\x2de5bd\x2d5a9b8be4f889.mount: Deactivated successfully. Sep 4 17:39:01.260486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897382190.mount: Deactivated successfully. Sep 4 17:39:01.667517 systemd-networkd[1386]: cali3a25d341758: Gained IPv6LL Sep 4 17:39:01.707841 containerd[1445]: time="2024-09-04T17:39:01.707779249Z" level=info msg="CreateContainer within sandbox \"f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"336bc37e672ae16a602cb6eec6d7ef7f1c6c9371e0c3150d84730f5ef010b47c\"" Sep 4 17:39:01.708368 containerd[1445]: time="2024-09-04T17:39:01.708309850Z" level=info msg="StartContainer for \"336bc37e672ae16a602cb6eec6d7ef7f1c6c9371e0c3150d84730f5ef010b47c\"" Sep 4 17:39:01.734447 systemd[1]: Started cri-containerd-336bc37e672ae16a602cb6eec6d7ef7f1c6c9371e0c3150d84730f5ef010b47c.scope - libcontainer container 336bc37e672ae16a602cb6eec6d7ef7f1c6c9371e0c3150d84730f5ef010b47c. Sep 4 17:39:01.768253 containerd[1445]: time="2024-09-04T17:39:01.768100667Z" level=info msg="CreateContainer within sandbox \"c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d74d151782333d0a50b92ec923358deb94bff44194b7d7f410f56edba4d9730f\"" Sep 4 17:39:01.768946 containerd[1445]: time="2024-09-04T17:39:01.768724930Z" level=info msg="StartContainer for \"d74d151782333d0a50b92ec923358deb94bff44194b7d7f410f56edba4d9730f\"" Sep 4 17:39:01.798475 systemd[1]: Started cri-containerd-d74d151782333d0a50b92ec923358deb94bff44194b7d7f410f56edba4d9730f.scope - libcontainer container d74d151782333d0a50b92ec923358deb94bff44194b7d7f410f56edba4d9730f. 
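[Editor's note] The run-netns-cni\x2d642fa2cd… and var-lib-containerd-tmpmounts-containerd\x2dmount2897382190.mount units deactivated above are systemd's escaped unit names for ordinary bind mounts: '/' in a mount path becomes '-', and a literal '-' becomes \x2d, so /run/netns/cni-642fa2cd-a04c-4469-e5bd-5a9b8be4f889 maps to the unit name shown. Running systemd-escape -p --suffix=mount /run/netns/cni-642fa2cd-a04c-4469-e5bd-5a9b8be4f889 should reproduce it.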
Sep 4 17:39:01.983821 containerd[1445]: time="2024-09-04T17:39:01.983707702Z" level=info msg="StartContainer for \"336bc37e672ae16a602cb6eec6d7ef7f1c6c9371e0c3150d84730f5ef010b47c\" returns successfully" Sep 4 17:39:01.983821 containerd[1445]: time="2024-09-04T17:39:01.983709375Z" level=info msg="StartContainer for \"d74d151782333d0a50b92ec923358deb94bff44194b7d7f410f56edba4d9730f\" returns successfully" Sep 4 17:39:01.988816 systemd-networkd[1386]: calib7d88029c38: Gained IPv6LL Sep 4 17:39:02.178295 systemd-networkd[1386]: cali4ffb9a80ab2: Link UP Sep 4 17:39:02.178527 systemd-networkd[1386]: cali4ffb9a80ab2: Gained carrier Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.046 [INFO][4405] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0 calico-kube-controllers-77f499bdf- calico-system 2c22d79b-aa3b-471c-9579-6849476d0d1d 868 0 2024-09-04 17:38:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77f499bdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77f499bdf-lsbg2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4ffb9a80ab2 [] []}} ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Namespace="calico-system" Pod="calico-kube-controllers-77f499bdf-lsbg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.046 [INFO][4405] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Namespace="calico-system" Pod="calico-kube-controllers-77f499bdf-lsbg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.083 [INFO][4418] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" HandleID="k8s-pod-network.57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.137 [INFO][4418] ipam_plugin.go 270: Auto assigning IP ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" HandleID="k8s-pod-network.57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77f499bdf-lsbg2", "timestamp":"2024-09-04 17:39:02.083602171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.137 [INFO][4418] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.138 [INFO][4418] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
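[Editor's note] The "Gained IPv6LL" events above are systemd-networkd noticing the kernel's IPv6 link-local address appear on each new cali* interface. Under classic EUI-64 derivation (one common addr_gen_mode; stable-privacy generation would yield a different address, so take this as illustrative), the address comes from the MAC by flipping the universal/local bit of the first octet and inserting ff:fe in the middle: for calib7d88029c38's MAC 32:af:bd:27:33:25, 0x32 XOR 0x02 = 0x30, giving fe80::30af:bdff:fe27:3325.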
Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.138 [INFO][4418] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.139 [INFO][4418] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" host="localhost" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.143 [INFO][4418] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.147 [INFO][4418] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.149 [INFO][4418] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.151 [INFO][4418] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.151 [INFO][4418] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" host="localhost" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.152 [INFO][4418] ipam.go 1685: Creating new handle: k8s-pod-network.57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9 Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.155 [INFO][4418] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" host="localhost" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.171 [INFO][4418] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" host="localhost" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.171 [INFO][4418] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" host="localhost" Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.171 [INFO][4418] ipam_plugin.go 379: Released host-wide IPAM lock. 
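[Editor's sketch] The ipam.go trace above is effectively a step list for Calico's block-affinity allocator: take the host-wide IPAM lock, look up the host's affine blocks, confirm the affinity for 192.168.88.128/26, load the block, claim one free address (192.168.88.132), record a handle, write the block back, and release the lock. A condensed, self-contained sketch of that control flow follows, with toy types standing in for Calico's real IPAM model; only the CIDR and the resulting address are taken from the log.

package main

import (
	"fmt"
	"net"
	"sync"
)

// Toy stand-ins for Calico's IPAM data model; illustrative only.
type block struct {
	cidr *net.IPNet
	next int // index of the next unassigned address in the block
}

type ipamStore struct {
	mu     sync.Mutex          // plays the role of the host-wide IPAM lock
	blocks map[string]*block   // CIDR -> block ("Attempting to load block")
	affine map[string][]string // host -> affine block CIDRs
}

func (s *ipamStore) autoAssignIPv4(host string) (net.IP, error) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	// "Looking up existing affinities for host" / "Trying affinity for <cidr>"
	for _, cidr := range s.affine[host] {
		b := s.blocks[cidr]
		if b == nil {
			continue
		}
		// "Attempting to assign 1 addresses from block"
		if ip := nthIP(b.cidr, b.next); ip != nil {
			b.next++       // real IPAM records a handle and writes the block back here
			return ip, nil // "Successfully claimed IPs"
		}
	}
	return nil, fmt.Errorf("no affine block with free addresses on %s", host)
}

// nthIP returns the n-th address of an IPv4 CIDR, or nil when out of range.
func nthIP(ipnet *net.IPNet, n int) net.IP {
	ip := ipnet.IP.To4()
	ones, bits := ipnet.Mask.Size()
	if ip == nil || n < 0 || n >= 1<<(bits-ones) {
		return nil
	}
	v := (uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])) + uint32(n)
	return net.IPv4(byte(v>>24), byte(v>>16), byte(v>>8), byte(v))
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	// .129-.131 are already held by the other endpoints in this log, so the
	// next free index here is 4.
	s := &ipamStore{
		blocks: map[string]*block{"192.168.88.128/26": {cidr: cidr, next: 4}},
		affine: map[string][]string{"localhost": {"192.168.88.128/26"}},
	}
	ip, err := s.autoAssignIPv4("localhost")
	fmt.Println(ip, err) // 192.168.88.132 <nil>, matching the claim above
}

Serializing assignment behind the host-wide lock also makes releases idempotent: as the later teardown entries show, a second release of the same handle simply finds nothing to do and is ignored.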
Sep 4 17:39:02.236526 containerd[1445]: 2024-09-04 17:39:02.171 [INFO][4418] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" HandleID="k8s-pod-network.57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:02.237327 containerd[1445]: 2024-09-04 17:39:02.175 [INFO][4405] k8s.go 386: Populated endpoint ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Namespace="calico-system" Pod="calico-kube-controllers-77f499bdf-lsbg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0", GenerateName:"calico-kube-controllers-77f499bdf-", Namespace:"calico-system", SelfLink:"", UID:"2c22d79b-aa3b-471c-9579-6849476d0d1d", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f499bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77f499bdf-lsbg2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ffb9a80ab2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:02.237327 containerd[1445]: 2024-09-04 17:39:02.175 [INFO][4405] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Namespace="calico-system" Pod="calico-kube-controllers-77f499bdf-lsbg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:02.237327 containerd[1445]: 2024-09-04 17:39:02.175 [INFO][4405] dataplane_linux.go 68: Setting the host side veth name to cali4ffb9a80ab2 ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Namespace="calico-system" Pod="calico-kube-controllers-77f499bdf-lsbg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:02.237327 containerd[1445]: 2024-09-04 17:39:02.177 [INFO][4405] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Namespace="calico-system" Pod="calico-kube-controllers-77f499bdf-lsbg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:02.237327 containerd[1445]: 2024-09-04 17:39:02.177 [INFO][4405] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Namespace="calico-system" Pod="calico-kube-controllers-77f499bdf-lsbg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0", GenerateName:"calico-kube-controllers-77f499bdf-", Namespace:"calico-system", SelfLink:"", UID:"2c22d79b-aa3b-471c-9579-6849476d0d1d", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f499bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9", Pod:"calico-kube-controllers-77f499bdf-lsbg2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ffb9a80ab2", MAC:"ba:80:2d:04:34:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:02.237327 containerd[1445]: 2024-09-04 17:39:02.231 [INFO][4405] k8s.go 500: Wrote updated endpoint to datastore ContainerID="57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9" Namespace="calico-system" Pod="calico-kube-controllers-77f499bdf-lsbg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:02.282703 kubelet[2615]: E0904 17:39:02.282661 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:02.284326 kubelet[2615]: E0904 17:39:02.284310 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:02.325447 containerd[1445]: time="2024-09-04T17:39:02.325139582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:39:02.325447 containerd[1445]: time="2024-09-04T17:39:02.325221741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:39:02.325447 containerd[1445]: time="2024-09-04T17:39:02.325236449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:02.326824 containerd[1445]: time="2024-09-04T17:39:02.326642019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:02.354483 systemd[1]: Started cri-containerd-57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9.scope - libcontainer container 57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9. Sep 4 17:39:02.370481 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:39:02.394956 containerd[1445]: time="2024-09-04T17:39:02.394909316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f499bdf-lsbg2,Uid:2c22d79b-aa3b-471c-9579-6849476d0d1d,Namespace:calico-system,Attempt:1,} returns sandbox id \"57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9\"" Sep 4 17:39:02.527827 kubelet[2615]: I0904 17:39:02.527774 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-v7vtb" podStartSLOduration=40.527730695 podStartE2EDuration="40.527730695s" podCreationTimestamp="2024-09-04 17:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:39:02.367869085 +0000 UTC m=+53.355937825" watchObservedRunningTime="2024-09-04 17:39:02.527730695 +0000 UTC m=+53.515799425" Sep 4 17:39:02.528304 kubelet[2615]: I0904 17:39:02.528067 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hf6kl" podStartSLOduration=40.528041529 podStartE2EDuration="40.528041529s" podCreationTimestamp="2024-09-04 17:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:39:02.527435091 +0000 UTC m=+53.515503832" watchObservedRunningTime="2024-09-04 17:39:02.528041529 +0000 UTC m=+53.516110269" Sep 4 17:39:02.563556 systemd-networkd[1386]: cali970ce37627a: Gained IPv6LL Sep 4 17:39:03.286852 kubelet[2615]: E0904 17:39:03.286603 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:03.286852 kubelet[2615]: E0904 17:39:03.286694 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:03.784905 containerd[1445]: time="2024-09-04T17:39:03.784839751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:03.798346 containerd[1445]: time="2024-09-04T17:39:03.798302815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:39:03.809015 containerd[1445]: time="2024-09-04T17:39:03.808980865Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:03.822922 containerd[1445]: time="2024-09-04T17:39:03.822858094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:03.823897 containerd[1445]: time="2024-09-04T17:39:03.823851280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id 
\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 3.048899691s" Sep 4 17:39:03.824018 containerd[1445]: time="2024-09-04T17:39:03.823896678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:39:03.824823 containerd[1445]: time="2024-09-04T17:39:03.824793219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:39:03.826256 containerd[1445]: time="2024-09-04T17:39:03.826231219Z" level=info msg="CreateContainer within sandbox \"dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:39:03.897517 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:56058.service - OpenSSH per-connection server daemon (10.0.0.1:56058). Sep 4 17:39:04.045328 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 56058 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:04.047428 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:04.052181 systemd-logind[1432]: New session 14 of user core. Sep 4 17:39:04.060452 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:39:04.070845 containerd[1445]: time="2024-09-04T17:39:04.070788419Z" level=info msg="CreateContainer within sandbox \"dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3b952aedf623df26f14225bf54fa8d57ebb83056ab12c7c0bcdde1c1d4f5077a\"" Sep 4 17:39:04.072540 containerd[1445]: time="2024-09-04T17:39:04.071284100Z" level=info msg="StartContainer for \"3b952aedf623df26f14225bf54fa8d57ebb83056ab12c7c0bcdde1c1d4f5077a\"" Sep 4 17:39:04.099562 systemd-networkd[1386]: cali4ffb9a80ab2: Gained IPv6LL Sep 4 17:39:04.105518 systemd[1]: Started cri-containerd-3b952aedf623df26f14225bf54fa8d57ebb83056ab12c7c0bcdde1c1d4f5077a.scope - libcontainer container 3b952aedf623df26f14225bf54fa8d57ebb83056ab12c7c0bcdde1c1d4f5077a. Sep 4 17:39:04.176639 containerd[1445]: time="2024-09-04T17:39:04.176560306Z" level=info msg="StartContainer for \"3b952aedf623df26f14225bf54fa8d57ebb83056ab12c7c0bcdde1c1d4f5077a\" returns successfully" Sep 4 17:39:04.290506 kubelet[2615]: E0904 17:39:04.290475 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:04.291095 kubelet[2615]: E0904 17:39:04.290728 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:04.384727 sshd[4501]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:04.391190 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:56058.service: Deactivated successfully. Sep 4 17:39:04.392801 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:39:04.393786 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:39:04.400736 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:56072.service - OpenSSH per-connection server daemon (10.0.0.1:56072). 
Sep 4 17:39:04.402077 systemd-logind[1432]: Removed session 14. Sep 4 17:39:04.435765 sshd[4553]: Accepted publickey for core from 10.0.0.1 port 56072 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:04.437485 sshd[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:04.442965 systemd-logind[1432]: New session 15 of user core. Sep 4 17:39:04.447584 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:39:04.637191 sshd[4553]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:04.648486 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:56072.service: Deactivated successfully. Sep 4 17:39:04.650782 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:39:04.652307 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:39:04.660809 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:56078.service - OpenSSH per-connection server daemon (10.0.0.1:56078). Sep 4 17:39:04.663491 systemd-logind[1432]: Removed session 15. Sep 4 17:39:04.705696 sshd[4565]: Accepted publickey for core from 10.0.0.1 port 56078 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:04.707178 sshd[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:04.710938 systemd-logind[1432]: New session 16 of user core. Sep 4 17:39:04.720473 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:39:04.936830 sshd[4565]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:04.941231 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:56078.service: Deactivated successfully. Sep 4 17:39:04.943317 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:39:04.944149 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:39:04.945103 systemd-logind[1432]: Removed session 16. 
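[Editor's note] The kubelet pod_startup_latency_tracker entries in this log carry their own arithmetic. For the two coredns pods above, firstStartedPulling and lastFinishedPulling are the zero time, and podStartSLOduration equals podStartE2EDuration (40.527730695s and 40.528041529s respectively). For the calico-kube-controllers entry further down, the numbers are consistent with the SLO duration being the end-to-end duration minus the image-pull window: 38.309656084s minus roughly 3.824s of pulling leaves exactly the reported 34.485557458s. A small check of that relationship, assuming the tracker does nothing more than this subtraction:

// Reproduces the startup-duration arithmetic from the kubelet entries,
// assuming SLO duration = end-to-end duration minus the image-pull window.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values copied from the calico-kube-controllers-77f499bdf-lsbg2 entry.
	created := parse("2024-09-04T17:38:29Z")
	running := parse("2024-09-04T17:39:07.309656084Z")
	pullStart := parse("2024-09-04T17:39:02.395927193Z")
	pullEnd := parse("2024-09-04T17:39:06.220025819Z")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e) // 38.309656084s (podStartE2EDuration)
	fmt.Println(slo) // 34.485557458s (podStartSLOduration)
}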
Sep 4 17:39:05.292641 kubelet[2615]: E0904 17:39:05.292609 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:06.214453 containerd[1445]: time="2024-09-04T17:39:06.214379031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:06.215629 containerd[1445]: time="2024-09-04T17:39:06.215588311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:39:06.216963 containerd[1445]: time="2024-09-04T17:39:06.216927363Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:06.219120 containerd[1445]: time="2024-09-04T17:39:06.219081792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:06.219731 containerd[1445]: time="2024-09-04T17:39:06.219669749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.394845181s" Sep 4 17:39:06.219731 containerd[1445]: time="2024-09-04T17:39:06.219723604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:39:06.221078 containerd[1445]: time="2024-09-04T17:39:06.220927283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:39:06.231748 containerd[1445]: time="2024-09-04T17:39:06.231622852Z" level=info msg="CreateContainer within sandbox \"57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:39:06.245852 containerd[1445]: time="2024-09-04T17:39:06.245807262Z" level=info msg="CreateContainer within sandbox \"57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"552428a76a9d8dd8691e17a1bb5dfc082e5e68a44469d27ea50535f030518853\"" Sep 4 17:39:06.246384 containerd[1445]: time="2024-09-04T17:39:06.246351887Z" level=info msg="StartContainer for \"552428a76a9d8dd8691e17a1bb5dfc082e5e68a44469d27ea50535f030518853\"" Sep 4 17:39:06.271476 systemd[1]: Started cri-containerd-552428a76a9d8dd8691e17a1bb5dfc082e5e68a44469d27ea50535f030518853.scope - libcontainer container 552428a76a9d8dd8691e17a1bb5dfc082e5e68a44469d27ea50535f030518853. 
Sep 4 17:39:06.313763 containerd[1445]: time="2024-09-04T17:39:06.313640467Z" level=info msg="StartContainer for \"552428a76a9d8dd8691e17a1bb5dfc082e5e68a44469d27ea50535f030518853\" returns successfully" Sep 4 17:39:07.310808 kubelet[2615]: I0904 17:39:07.309705 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77f499bdf-lsbg2" podStartSLOduration=34.485557458 podStartE2EDuration="38.309656084s" podCreationTimestamp="2024-09-04 17:38:29 +0000 UTC" firstStartedPulling="2024-09-04 17:39:02.395927193 +0000 UTC m=+53.383995933" lastFinishedPulling="2024-09-04 17:39:06.220025819 +0000 UTC m=+57.208094559" observedRunningTime="2024-09-04 17:39:07.3092192 +0000 UTC m=+58.297287930" watchObservedRunningTime="2024-09-04 17:39:07.309656084 +0000 UTC m=+58.297724824" Sep 4 17:39:08.332570 containerd[1445]: time="2024-09-04T17:39:08.332512803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:08.333450 containerd[1445]: time="2024-09-04T17:39:08.333412471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:39:08.334664 containerd[1445]: time="2024-09-04T17:39:08.334639072Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:08.336810 containerd[1445]: time="2024-09-04T17:39:08.336777753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:08.337445 containerd[1445]: time="2024-09-04T17:39:08.337416588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.116438536s" Sep 4 17:39:08.337483 containerd[1445]: time="2024-09-04T17:39:08.337448158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:39:08.340214 containerd[1445]: time="2024-09-04T17:39:08.340183724Z" level=info msg="CreateContainer within sandbox \"dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:39:08.355896 containerd[1445]: time="2024-09-04T17:39:08.355867446Z" level=info msg="CreateContainer within sandbox \"dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2c6a8507ee1cd35d288de5bc01b55bf9f5a432cb1c4726616f23adc66529f07b\"" Sep 4 17:39:08.356259 containerd[1445]: time="2024-09-04T17:39:08.356240407Z" level=info msg="StartContainer for \"2c6a8507ee1cd35d288de5bc01b55bf9f5a432cb1c4726616f23adc66529f07b\"" Sep 4 17:39:08.387045 systemd[1]: Started cri-containerd-2c6a8507ee1cd35d288de5bc01b55bf9f5a432cb1c4726616f23adc66529f07b.scope - libcontainer container 
2c6a8507ee1cd35d288de5bc01b55bf9f5a432cb1c4726616f23adc66529f07b. Sep 4 17:39:08.420947 containerd[1445]: time="2024-09-04T17:39:08.420891937Z" level=info msg="StartContainer for \"2c6a8507ee1cd35d288de5bc01b55bf9f5a432cb1c4726616f23adc66529f07b\" returns successfully" Sep 4 17:39:09.095821 containerd[1445]: time="2024-09-04T17:39:09.095472072Z" level=info msg="StopPodSandbox for \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\"" Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.127 [WARNING][4716] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8vp2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d622273-42a7-410b-a788-c97fd7c8d977", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2", Pod:"csi-node-driver-8vp2t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3a25d341758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.128 [INFO][4716] k8s.go 608: Cleaning up netns ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.128 [INFO][4716] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" iface="eth0" netns="" Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.128 [INFO][4716] k8s.go 615: Releasing IP address(es) ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.128 [INFO][4716] utils.go 188: Calico CNI releasing IP address ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.148 [INFO][4725] ipam_plugin.go 417: Releasing address using handleID ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" HandleID="k8s-pod-network.a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.148 [INFO][4725] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
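[Editor's sketch] The k8s.go 572 WARNING above records Calico's stale-DEL guard: the teardown is for the old sandbox a5a8ae2b..., but the WorkloadEndpoint for csi-node-driver-8vp2t already names the newer sandbox dba2ae43... as its owner, so the endpoint itself is kept while netns cleanup and IPAM release still proceed (the release then finds nothing and logs "Asked to release address but it doesn't exist. Ignoring", as the next entries show). A sketch of that decision, with toy types in place of Calico's real ones:

// Hypothetical sketch of the guard behind the k8s.go 572 WARNING: a CNI
// DEL for an old sandbox must not delete a WorkloadEndpoint that a newer
// sandbox already owns. Types here are illustrative stand-ins.
package main

import "fmt"

type workloadEndpoint struct {
	Pod         string
	ContainerID string // sandbox that currently owns the endpoint
}

// shouldDeleteWEP mirrors the logged decision: keep the endpoint when the
// DEL's container ID is stale. Netns cleanup and IPAM release run either
// way; a missing address is logged and ignored.
func shouldDeleteWEP(wep *workloadEndpoint, cniContainerID string) bool {
	if wep != nil && wep.ContainerID != cniContainerID {
		fmt.Printf("WARNING: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID=%q\n", cniContainerID)
		return false
	}
	return true
}

func main() {
	// IDs shortened from the log: the endpoint points at the newer sandbox
	// dba2ae43..., while the DEL targets the old sandbox a5a8ae2b... .
	wep := &workloadEndpoint{Pod: "csi-node-driver-8vp2t", ContainerID: "dba2ae43e327"}
	fmt.Println(shouldDeleteWEP(wep, "a5a8ae2b8e6d")) // false: endpoint kept
}

The same guard fires for each of the sandboxes removed below (19a3eaad..., 6fca5dc4..., c5257d46...), since every one of those pods was re-created into a new sandbox earlier in the log.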
Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.148 [INFO][4725] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.153 [WARNING][4725] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" HandleID="k8s-pod-network.a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.154 [INFO][4725] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" HandleID="k8s-pod-network.a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.155 [INFO][4725] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:09.160054 containerd[1445]: 2024-09-04 17:39:09.157 [INFO][4716] k8s.go 621: Teardown processing complete. ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:39:09.160711 containerd[1445]: time="2024-09-04T17:39:09.160651965Z" level=info msg="TearDown network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\" successfully" Sep 4 17:39:09.160711 containerd[1445]: time="2024-09-04T17:39:09.160683996Z" level=info msg="StopPodSandbox for \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\" returns successfully" Sep 4 17:39:09.167199 containerd[1445]: time="2024-09-04T17:39:09.167149408Z" level=info msg="RemovePodSandbox for \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\"" Sep 4 17:39:09.170666 containerd[1445]: time="2024-09-04T17:39:09.170619678Z" level=info msg="Forcibly stopping sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\"" Sep 4 17:39:09.204717 kubelet[2615]: I0904 17:39:09.204664 2615 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:39:09.205844 kubelet[2615]: I0904 17:39:09.205833 2615 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.208 [WARNING][4748] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8vp2t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3d622273-42a7-410b-a788-c97fd7c8d977", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dba2ae43e327be27401f2fe3cc8002514ae54f383125b9e4789048ad537dd7a2", Pod:"csi-node-driver-8vp2t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3a25d341758", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.208 [INFO][4748] k8s.go 608: Cleaning up netns ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.208 [INFO][4748] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" iface="eth0" netns="" Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.209 [INFO][4748] k8s.go 615: Releasing IP address(es) ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.209 [INFO][4748] utils.go 188: Calico CNI releasing IP address ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.253 [INFO][4757] ipam_plugin.go 417: Releasing address using handleID ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" HandleID="k8s-pod-network.a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.253 [INFO][4757] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.253 [INFO][4757] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.258 [WARNING][4757] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" HandleID="k8s-pod-network.a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.258 [INFO][4757] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" HandleID="k8s-pod-network.a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Workload="localhost-k8s-csi--node--driver--8vp2t-eth0" Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.259 [INFO][4757] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:09.264455 containerd[1445]: 2024-09-04 17:39:09.261 [INFO][4748] k8s.go 621: Teardown processing complete. ContainerID="a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544" Sep 4 17:39:09.264868 containerd[1445]: time="2024-09-04T17:39:09.264480810Z" level=info msg="TearDown network for sandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\" successfully" Sep 4 17:39:09.282647 containerd[1445]: time="2024-09-04T17:39:09.282591574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:39:09.282730 containerd[1445]: time="2024-09-04T17:39:09.282693500Z" level=info msg="RemovePodSandbox \"a5a8ae2b8e6db4af6d965b3050f65ee63c7205d5f5fb4c0eead3f422564b3544\" returns successfully" Sep 4 17:39:09.283269 containerd[1445]: time="2024-09-04T17:39:09.283243913Z" level=info msg="StopPodSandbox for \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\"" Sep 4 17:39:09.325245 kubelet[2615]: I0904 17:39:09.325182 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8vp2t" podStartSLOduration=32.761945962 podStartE2EDuration="40.325123964s" podCreationTimestamp="2024-09-04 17:38:29 +0000 UTC" firstStartedPulling="2024-09-04 17:39:00.774447108 +0000 UTC m=+51.762515848" lastFinishedPulling="2024-09-04 17:39:08.33762511 +0000 UTC m=+59.325693850" observedRunningTime="2024-09-04 17:39:09.324772204 +0000 UTC m=+60.312840954" watchObservedRunningTime="2024-09-04 17:39:09.325123964 +0000 UTC m=+60.313192704" Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.320 [WARNING][4779] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--v7vtb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0b703636-a98c-4502-8b45-5e98626c26a6", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386", Pod:"coredns-76f75df574-v7vtb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7d88029c38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.320 [INFO][4779] k8s.go 608: Cleaning up netns ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.320 [INFO][4779] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" iface="eth0" netns="" Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.320 [INFO][4779] k8s.go 615: Releasing IP address(es) ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.320 [INFO][4779] utils.go 188: Calico CNI releasing IP address ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.344 [INFO][4788] ipam_plugin.go 417: Releasing address using handleID ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" HandleID="k8s-pod-network.19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.344 [INFO][4788] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.344 [INFO][4788] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.349 [WARNING][4788] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" HandleID="k8s-pod-network.19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.349 [INFO][4788] ipam_plugin.go 445: Releasing address using workloadID ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" HandleID="k8s-pod-network.19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.350 [INFO][4788] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:09.355589 containerd[1445]: 2024-09-04 17:39:09.353 [INFO][4779] k8s.go 621: Teardown processing complete. ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:39:09.356303 containerd[1445]: time="2024-09-04T17:39:09.355576804Z" level=info msg="TearDown network for sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\" successfully" Sep 4 17:39:09.356303 containerd[1445]: time="2024-09-04T17:39:09.355607002Z" level=info msg="StopPodSandbox for \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\" returns successfully" Sep 4 17:39:09.356303 containerd[1445]: time="2024-09-04T17:39:09.356193604Z" level=info msg="RemovePodSandbox for \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\"" Sep 4 17:39:09.356303 containerd[1445]: time="2024-09-04T17:39:09.356219174Z" level=info msg="Forcibly stopping sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\"" Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.387 [WARNING][4810] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--v7vtb-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0b703636-a98c-4502-8b45-5e98626c26a6", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f29a082f6c27953b1563a6d7e6dc9f1a431ddf68376cb8469deefb21b8e3e386", Pod:"coredns-76f75df574-v7vtb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7d88029c38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.387 [INFO][4810] k8s.go 608: Cleaning up netns ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.387 [INFO][4810] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" iface="eth0" netns="" Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.387 [INFO][4810] k8s.go 615: Releasing IP address(es) ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.387 [INFO][4810] utils.go 188: Calico CNI releasing IP address ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.405 [INFO][4817] ipam_plugin.go 417: Releasing address using handleID ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" HandleID="k8s-pod-network.19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.405 [INFO][4817] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.405 [INFO][4817] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.411 [WARNING][4817] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" HandleID="k8s-pod-network.19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.411 [INFO][4817] ipam_plugin.go 445: Releasing address using workloadID ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" HandleID="k8s-pod-network.19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Workload="localhost-k8s-coredns--76f75df574--v7vtb-eth0" Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.412 [INFO][4817] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:09.417371 containerd[1445]: 2024-09-04 17:39:09.414 [INFO][4810] k8s.go 621: Teardown processing complete. ContainerID="19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469" Sep 4 17:39:09.418054 containerd[1445]: time="2024-09-04T17:39:09.417407690Z" level=info msg="TearDown network for sandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\" successfully" Sep 4 17:39:09.421189 containerd[1445]: time="2024-09-04T17:39:09.421138654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:39:09.421231 containerd[1445]: time="2024-09-04T17:39:09.421200032Z" level=info msg="RemovePodSandbox \"19a3eaad19c8dcf635d54cd2b01a077ab05144c985b9f0db3b1fd096eb8a3469\" returns successfully" Sep 4 17:39:09.422001 containerd[1445]: time="2024-09-04T17:39:09.421750264Z" level=info msg="StopPodSandbox for \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\"" Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.459 [WARNING][4840] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hf6kl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"97fc0f1c-4d49-47c0-a204-0725392f4861", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa", Pod:"coredns-76f75df574-hf6kl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali970ce37627a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.459 [INFO][4840] k8s.go 608: Cleaning up netns ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.459 [INFO][4840] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" iface="eth0" netns="" Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.459 [INFO][4840] k8s.go 615: Releasing IP address(es) ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.459 [INFO][4840] utils.go 188: Calico CNI releasing IP address ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.481 [INFO][4847] ipam_plugin.go 417: Releasing address using handleID ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" HandleID="k8s-pod-network.6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.481 [INFO][4847] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.481 [INFO][4847] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.487 [WARNING][4847] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" HandleID="k8s-pod-network.6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.487 [INFO][4847] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" HandleID="k8s-pod-network.6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.490 [INFO][4847] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:09.504525 containerd[1445]: 2024-09-04 17:39:09.499 [INFO][4840] k8s.go 621: Teardown processing complete. ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:09.505693 containerd[1445]: time="2024-09-04T17:39:09.504567596Z" level=info msg="TearDown network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\" successfully" Sep 4 17:39:09.505693 containerd[1445]: time="2024-09-04T17:39:09.504598996Z" level=info msg="StopPodSandbox for \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\" returns successfully" Sep 4 17:39:09.505693 containerd[1445]: time="2024-09-04T17:39:09.505139530Z" level=info msg="RemovePodSandbox for \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\"" Sep 4 17:39:09.505693 containerd[1445]: time="2024-09-04T17:39:09.505192612Z" level=info msg="Forcibly stopping sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\"" Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.560 [WARNING][4869] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hf6kl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"97fc0f1c-4d49-47c0-a204-0725392f4861", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8adb37774094ac1b1feff2aa44bd3991974f8e0970ffeca5bdc854f0536a3aa", Pod:"coredns-76f75df574-hf6kl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali970ce37627a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.561 [INFO][4869] k8s.go 608: Cleaning up netns ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.561 [INFO][4869] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" iface="eth0" netns="" Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.561 [INFO][4869] k8s.go 615: Releasing IP address(es) ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.561 [INFO][4869] utils.go 188: Calico CNI releasing IP address ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.580 [INFO][4877] ipam_plugin.go 417: Releasing address using handleID ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" HandleID="k8s-pod-network.6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.580 [INFO][4877] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.580 [INFO][4877] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.585 [WARNING][4877] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" HandleID="k8s-pod-network.6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.585 [INFO][4877] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" HandleID="k8s-pod-network.6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Workload="localhost-k8s-coredns--76f75df574--hf6kl-eth0" Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.586 [INFO][4877] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:09.590652 containerd[1445]: 2024-09-04 17:39:09.588 [INFO][4869] k8s.go 621: Teardown processing complete. ContainerID="6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe" Sep 4 17:39:09.591094 containerd[1445]: time="2024-09-04T17:39:09.590700318Z" level=info msg="TearDown network for sandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\" successfully" Sep 4 17:39:09.594453 containerd[1445]: time="2024-09-04T17:39:09.594416753Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:39:09.594572 containerd[1445]: time="2024-09-04T17:39:09.594466759Z" level=info msg="RemovePodSandbox \"6fca5dc4fd069c451dcc0ee3a19f764705e9adb797816de37cf23640178997fe\" returns successfully" Sep 4 17:39:09.595061 containerd[1445]: time="2024-09-04T17:39:09.595023955Z" level=info msg="StopPodSandbox for \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\"" Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.625 [WARNING][4900] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0", GenerateName:"calico-kube-controllers-77f499bdf-", Namespace:"calico-system", SelfLink:"", UID:"2c22d79b-aa3b-471c-9579-6849476d0d1d", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f499bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9", Pod:"calico-kube-controllers-77f499bdf-lsbg2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ffb9a80ab2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.626 [INFO][4900] k8s.go 608: Cleaning up netns ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.626 [INFO][4900] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" iface="eth0" netns="" Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.626 [INFO][4900] k8s.go 615: Releasing IP address(es) ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.626 [INFO][4900] utils.go 188: Calico CNI releasing IP address ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.646 [INFO][4907] ipam_plugin.go 417: Releasing address using handleID ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" HandleID="k8s-pod-network.c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.646 [INFO][4907] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.646 [INFO][4907] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.651 [WARNING][4907] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" HandleID="k8s-pod-network.c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.651 [INFO][4907] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" HandleID="k8s-pod-network.c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.653 [INFO][4907] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:09.657757 containerd[1445]: 2024-09-04 17:39:09.655 [INFO][4900] k8s.go 621: Teardown processing complete. ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:09.657757 containerd[1445]: time="2024-09-04T17:39:09.657712457Z" level=info msg="TearDown network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\" successfully" Sep 4 17:39:09.657757 containerd[1445]: time="2024-09-04T17:39:09.657739028Z" level=info msg="StopPodSandbox for \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\" returns successfully" Sep 4 17:39:09.658218 containerd[1445]: time="2024-09-04T17:39:09.658188947Z" level=info msg="RemovePodSandbox for \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\"" Sep 4 17:39:09.658218 containerd[1445]: time="2024-09-04T17:39:09.658210178Z" level=info msg="Forcibly stopping sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\"" Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.689 [WARNING][4930] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0", GenerateName:"calico-kube-controllers-77f499bdf-", Namespace:"calico-system", SelfLink:"", UID:"2c22d79b-aa3b-471c-9579-6849476d0d1d", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 38, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f499bdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57b83ca3977e3d8835898de75ade53c8d4426ab7df228b0988e9c3533a69b0e9", Pod:"calico-kube-controllers-77f499bdf-lsbg2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ffb9a80ab2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.689 [INFO][4930] k8s.go 608: Cleaning up netns ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.689 [INFO][4930] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" iface="eth0" netns="" Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.689 [INFO][4930] k8s.go 615: Releasing IP address(es) ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.690 [INFO][4930] utils.go 188: Calico CNI releasing IP address ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.708 [INFO][4937] ipam_plugin.go 417: Releasing address using handleID ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" HandleID="k8s-pod-network.c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.709 [INFO][4937] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.709 [INFO][4937] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.713 [WARNING][4937] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" HandleID="k8s-pod-network.c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.713 [INFO][4937] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" HandleID="k8s-pod-network.c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Workload="localhost-k8s-calico--kube--controllers--77f499bdf--lsbg2-eth0" Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.715 [INFO][4937] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:39:09.719569 containerd[1445]: 2024-09-04 17:39:09.717 [INFO][4930] k8s.go 621: Teardown processing complete. ContainerID="c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250" Sep 4 17:39:09.719964 containerd[1445]: time="2024-09-04T17:39:09.719609441Z" level=info msg="TearDown network for sandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\" successfully" Sep 4 17:39:09.723185 containerd[1445]: time="2024-09-04T17:39:09.723026207Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:39:09.723185 containerd[1445]: time="2024-09-04T17:39:09.723082406Z" level=info msg="RemovePodSandbox \"c5257d46b52dc971d51482460e61d27e85e79f701aa8a4c66ea2433e5b2cb250\" returns successfully" Sep 4 17:39:09.956050 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:53168.service - OpenSSH per-connection server daemon (10.0.0.1:53168). Sep 4 17:39:10.016327 sshd[4948]: Accepted publickey for core from 10.0.0.1 port 53168 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:10.018420 sshd[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:10.022927 systemd-logind[1432]: New session 17 of user core. Sep 4 17:39:10.032575 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:39:10.157113 sshd[4948]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:10.161130 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:53168.service: Deactivated successfully. Sep 4 17:39:10.163217 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:39:10.163989 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:39:10.164958 systemd-logind[1432]: Removed session 17. Sep 4 17:39:15.174589 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:53176.service - OpenSSH per-connection server daemon (10.0.0.1:53176). Sep 4 17:39:15.217965 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 53176 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:15.220213 sshd[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:15.225431 systemd-logind[1432]: New session 18 of user core. Sep 4 17:39:15.234526 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:39:15.358561 sshd[4984]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:15.363059 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:53176.service: Deactivated successfully. Sep 4 17:39:15.365976 systemd[1]: session-18.scope: Deactivated successfully. 
Sep 4 17:39:15.366804 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:39:15.367849 systemd-logind[1432]: Removed session 18. Sep 4 17:39:20.376066 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:57502.service - OpenSSH per-connection server daemon (10.0.0.1:57502). Sep 4 17:39:20.415986 sshd[5009]: Accepted publickey for core from 10.0.0.1 port 57502 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:20.418150 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:20.423823 systemd-logind[1432]: New session 19 of user core. Sep 4 17:39:20.430499 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:39:20.551182 sshd[5009]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:20.556021 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:57502.service: Deactivated successfully. Sep 4 17:39:20.558067 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:39:20.558825 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:39:20.559905 systemd-logind[1432]: Removed session 19. Sep 4 17:39:25.592393 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:57516.service - OpenSSH per-connection server daemon (10.0.0.1:57516). Sep 4 17:39:25.641004 sshd[5027]: Accepted publickey for core from 10.0.0.1 port 57516 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:25.644004 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:25.651561 systemd-logind[1432]: New session 20 of user core. Sep 4 17:39:25.663881 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:39:25.810550 sshd[5027]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:25.817122 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:57516.service: Deactivated successfully. Sep 4 17:39:25.819972 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:39:25.820864 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:39:25.822324 systemd-logind[1432]: Removed session 20. Sep 4 17:39:26.108542 kubelet[2615]: E0904 17:39:26.107717 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:29.973902 kubelet[2615]: I0904 17:39:29.973846 2615 topology_manager.go:215] "Topology Admit Handler" podUID="f3e75584-7b2b-4980-b407-9cb0878e6546" podNamespace="calico-apiserver" podName="calico-apiserver-6bc8c955d4-qxtdv" Sep 4 17:39:29.993400 systemd[1]: Created slice kubepods-besteffort-podf3e75584_7b2b_4980_b407_9cb0878e6546.slice - libcontainer container kubepods-besteffort-podf3e75584_7b2b_4980_b407_9cb0878e6546.slice. 
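The slice name just above is derived mechanically from the pod: with the systemd cgroup driver, kubelet places a BestEffort pod under kubepods-besteffort-pod<UID>.slice, rewriting the dashes in the pod UID to underscores because systemd reserves "-" to separate cgroup path components. A small sketch of that conversion (the helper name is illustrative, not kubelet API):

    package main

    import (
        "fmt"
        "strings"
    )

    // besteffortSliceName mirrors how the kubelet's systemd cgroup driver
    // names a BestEffort pod's slice: the last path component embeds the
    // pod UID with "-" replaced by "_".
    func besteffortSliceName(podUID string) string {
        return fmt.Sprintf("kubepods-besteffort-pod%s.slice", strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID of calico-apiserver-6bc8c955d4-qxtdv from the log above.
        fmt.Println(besteffortSliceName("f3e75584-7b2b-4980-b407-9cb0878e6546"))
        // Output: kubepods-besteffort-podf3e75584_7b2b_4980_b407_9cb0878e6546.slice
    }

The output matches the unit name systemd reports creating for this pod.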
Sep 4 17:39:30.131909 kubelet[2615]: I0904 17:39:30.131705 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8ckk\" (UniqueName: \"kubernetes.io/projected/f3e75584-7b2b-4980-b407-9cb0878e6546-kube-api-access-f8ckk\") pod \"calico-apiserver-6bc8c955d4-qxtdv\" (UID: \"f3e75584-7b2b-4980-b407-9cb0878e6546\") " pod="calico-apiserver/calico-apiserver-6bc8c955d4-qxtdv" Sep 4 17:39:30.131909 kubelet[2615]: I0904 17:39:30.131777 2615 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f3e75584-7b2b-4980-b407-9cb0878e6546-calico-apiserver-certs\") pod \"calico-apiserver-6bc8c955d4-qxtdv\" (UID: \"f3e75584-7b2b-4980-b407-9cb0878e6546\") " pod="calico-apiserver/calico-apiserver-6bc8c955d4-qxtdv" Sep 4 17:39:30.309066 containerd[1445]: time="2024-09-04T17:39:30.309003035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc8c955d4-qxtdv,Uid:f3e75584-7b2b-4980-b407-9cb0878e6546,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:39:30.651562 systemd-networkd[1386]: cali3560abb1c58: Link UP Sep 4 17:39:30.651865 systemd-networkd[1386]: cali3560abb1c58: Gained carrier Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.452 [INFO][5097] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0 calico-apiserver-6bc8c955d4- calico-apiserver f3e75584-7b2b-4980-b407-9cb0878e6546 1106 0 2024-09-04 17:39:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bc8c955d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bc8c955d4-qxtdv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3560abb1c58 [] []}} ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Namespace="calico-apiserver" Pod="calico-apiserver-6bc8c955d4-qxtdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.452 [INFO][5097] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Namespace="calico-apiserver" Pod="calico-apiserver-6bc8c955d4-qxtdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.544 [INFO][5109] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" HandleID="k8s-pod-network.a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Workload="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.575 [INFO][5109] ipam_plugin.go 270: Auto assigning IP ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" HandleID="k8s-pod-network.a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Workload="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036cbc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bc8c955d4-qxtdv", 
"timestamp":"2024-09-04 17:39:30.544029872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.575 [INFO][5109] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.575 [INFO][5109] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.575 [INFO][5109] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.579 [INFO][5109] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" host="localhost" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.592 [INFO][5109] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.603 [INFO][5109] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.608 [INFO][5109] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.612 [INFO][5109] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.612 [INFO][5109] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" host="localhost" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.624 [INFO][5109] ipam.go 1685: Creating new handle: k8s-pod-network.a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.631 [INFO][5109] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" host="localhost" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.641 [INFO][5109] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" host="localhost" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.642 [INFO][5109] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" host="localhost" Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.642 [INFO][5109] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:39:30.681135 containerd[1445]: 2024-09-04 17:39:30.642 [INFO][5109] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" HandleID="k8s-pod-network.a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Workload="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" Sep 4 17:39:30.682034 containerd[1445]: 2024-09-04 17:39:30.646 [INFO][5097] k8s.go 386: Populated endpoint ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Namespace="calico-apiserver" Pod="calico-apiserver-6bc8c955d4-qxtdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0", GenerateName:"calico-apiserver-6bc8c955d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3e75584-7b2b-4980-b407-9cb0878e6546", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 39, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc8c955d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bc8c955d4-qxtdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3560abb1c58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:30.682034 containerd[1445]: 2024-09-04 17:39:30.646 [INFO][5097] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Namespace="calico-apiserver" Pod="calico-apiserver-6bc8c955d4-qxtdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" Sep 4 17:39:30.682034 containerd[1445]: 2024-09-04 17:39:30.646 [INFO][5097] dataplane_linux.go 68: Setting the host side veth name to cali3560abb1c58 ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Namespace="calico-apiserver" Pod="calico-apiserver-6bc8c955d4-qxtdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" Sep 4 17:39:30.682034 containerd[1445]: 2024-09-04 17:39:30.650 [INFO][5097] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Namespace="calico-apiserver" Pod="calico-apiserver-6bc8c955d4-qxtdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" Sep 4 17:39:30.682034 containerd[1445]: 2024-09-04 17:39:30.650 [INFO][5097] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Namespace="calico-apiserver" 
Pod="calico-apiserver-6bc8c955d4-qxtdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0", GenerateName:"calico-apiserver-6bc8c955d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3e75584-7b2b-4980-b407-9cb0878e6546", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 39, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc8c955d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c", Pod:"calico-apiserver-6bc8c955d4-qxtdv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3560abb1c58", MAC:"96:cb:13:14:69:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:39:30.682034 containerd[1445]: 2024-09-04 17:39:30.673 [INFO][5097] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c" Namespace="calico-apiserver" Pod="calico-apiserver-6bc8c955d4-qxtdv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc8c955d4--qxtdv-eth0" Sep 4 17:39:30.745995 containerd[1445]: time="2024-09-04T17:39:30.745156863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:39:30.745995 containerd[1445]: time="2024-09-04T17:39:30.745245583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:39:30.745995 containerd[1445]: time="2024-09-04T17:39:30.745283685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:30.745995 containerd[1445]: time="2024-09-04T17:39:30.745451075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:39:30.794659 systemd[1]: Started cri-containerd-a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c.scope - libcontainer container a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c. Sep 4 17:39:30.838485 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:42124.service - OpenSSH per-connection server daemon (10.0.0.1:42124). 
Sep 4 17:39:30.848805 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:39:30.923327 containerd[1445]: time="2024-09-04T17:39:30.922286471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc8c955d4-qxtdv,Uid:f3e75584-7b2b-4980-b407-9cb0878e6546,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c\"" Sep 4 17:39:30.929585 containerd[1445]: time="2024-09-04T17:39:30.927534211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:39:30.953767 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 42124 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:30.959259 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:30.972271 systemd-logind[1432]: New session 21 of user core. Sep 4 17:39:30.989775 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:39:31.333173 sshd[5167]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:31.352228 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:42124.service: Deactivated successfully. Sep 4 17:39:31.357213 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:39:31.359547 systemd-logind[1432]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:39:31.380989 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:42126.service - OpenSSH per-connection server daemon (10.0.0.1:42126). Sep 4 17:39:31.393874 systemd-logind[1432]: Removed session 21. Sep 4 17:39:31.448214 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 42126 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:31.451461 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:31.465057 systemd-logind[1432]: New session 22 of user core. Sep 4 17:39:31.514255 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:39:31.940907 systemd-networkd[1386]: cali3560abb1c58: Gained IPv6LL Sep 4 17:39:32.105995 kubelet[2615]: E0904 17:39:32.105847 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:32.406701 sshd[5189]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:32.421386 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:42126.service: Deactivated successfully. Sep 4 17:39:32.436110 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:39:32.452161 systemd-logind[1432]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:39:32.473250 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:42136.service - OpenSSH per-connection server daemon (10.0.0.1:42136). Sep 4 17:39:32.481238 systemd-logind[1432]: Removed session 22. Sep 4 17:39:32.628108 sshd[5201]: Accepted publickey for core from 10.0.0.1 port 42136 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:32.630415 sshd[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:32.669611 systemd-logind[1432]: New session 23 of user core. Sep 4 17:39:32.684843 systemd[1]: Started session-23.scope - Session 23 of User core. 
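Sessions 21 through 23 above (and every numbered session before and after) trace one fixed lifecycle: a socket-activated per-connection sshd unit starts, pam_unix opens the session for user core, logind creates session-N.scope, and teardown runs the same steps in reverse before the unit deactivates. The per-connection unit names encode the connection tuple directly; a trivial sketch of the naming pattern as it appears in these lines (illustrative only, not systemd's implementation):

    package main

    import "fmt"

    // perConnectionUnit rebuilds the unit-name pattern visible above,
    // e.g. "sshd@20-10.0.0.117:22-10.0.0.1:42124.service": an instance
    // counter plus the local and remote address:port pairs of the
    // accepted connection.
    func perConnectionUnit(n int, localAddr string, localPort int, remoteAddr string, remotePort int) string {
        return fmt.Sprintf("sshd@%d-%s:%d-%s:%d.service", n, localAddr, localPort, remoteAddr, remotePort)
    }

    func main() {
        fmt.Println(perConnectionUnit(20, "10.0.0.117", 22, "10.0.0.1", 42124))
        // Output: sshd@20-10.0.0.117:22-10.0.0.1:42124.service
    }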
Sep 4 17:39:33.111721 kubelet[2615]: E0904 17:39:33.111222 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:36.529583 sshd[5201]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:36.573641 containerd[1445]: time="2024-09-04T17:39:36.572785099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:36.591822 containerd[1445]: time="2024-09-04T17:39:36.573918776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:39:36.591822 containerd[1445]: time="2024-09-04T17:39:36.581617119Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:36.588558 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:42150.service - OpenSSH per-connection server daemon (10.0.0.1:42150). Sep 4 17:39:36.589445 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:42136.service: Deactivated successfully. Sep 4 17:39:36.593362 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:39:36.602023 systemd-logind[1432]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:39:36.611892 containerd[1445]: time="2024-09-04T17:39:36.608475629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:39:36.624504 containerd[1445]: time="2024-09-04T17:39:36.619463572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 5.69187073s" Sep 4 17:39:36.624504 containerd[1445]: time="2024-09-04T17:39:36.619520631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:39:36.635381 containerd[1445]: time="2024-09-04T17:39:36.631255034Z" level=info msg="CreateContainer within sandbox \"a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:39:36.634847 systemd-logind[1432]: Removed session 23. 
Sep 4 17:39:36.720633 containerd[1445]: time="2024-09-04T17:39:36.720539620Z" level=info msg="CreateContainer within sandbox \"a9fc9aa5d24f54ad0361c8afc7f1484335c442ad8eca639ea2962ca86314819c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f81a38777399dab26cebe9dd576e14ea0c797cb72abfecf45cd4a64d54a1029f\"" Sep 4 17:39:36.723451 containerd[1445]: time="2024-09-04T17:39:36.721877377Z" level=info msg="StartContainer for \"f81a38777399dab26cebe9dd576e14ea0c797cb72abfecf45cd4a64d54a1029f\"" Sep 4 17:39:36.738481 sshd[5236]: Accepted publickey for core from 10.0.0.1 port 42150 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:36.752975 sshd[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:36.798491 systemd-logind[1432]: New session 24 of user core. Sep 4 17:39:36.826594 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:39:36.927085 systemd[1]: Started cri-containerd-f81a38777399dab26cebe9dd576e14ea0c797cb72abfecf45cd4a64d54a1029f.scope - libcontainer container f81a38777399dab26cebe9dd576e14ea0c797cb72abfecf45cd4a64d54a1029f. Sep 4 17:39:37.084026 containerd[1445]: time="2024-09-04T17:39:37.083864766Z" level=info msg="StartContainer for \"f81a38777399dab26cebe9dd576e14ea0c797cb72abfecf45cd4a64d54a1029f\" returns successfully" Sep 4 17:39:37.424562 sshd[5236]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:37.451293 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:42150.service: Deactivated successfully. Sep 4 17:39:37.458962 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:39:37.464770 systemd-logind[1432]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:39:37.487942 kubelet[2615]: I0904 17:39:37.487876 2615 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bc8c955d4-qxtdv" podStartSLOduration=2.794094449 podStartE2EDuration="8.487806676s" podCreationTimestamp="2024-09-04 17:39:29 +0000 UTC" firstStartedPulling="2024-09-04 17:39:30.926795562 +0000 UTC m=+81.914864312" lastFinishedPulling="2024-09-04 17:39:36.620507799 +0000 UTC m=+87.608576539" observedRunningTime="2024-09-04 17:39:37.486456758 +0000 UTC m=+88.474525498" watchObservedRunningTime="2024-09-04 17:39:37.487806676 +0000 UTC m=+88.475875416" Sep 4 17:39:37.495933 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:42158.service - OpenSSH per-connection server daemon (10.0.0.1:42158). Sep 4 17:39:37.502208 systemd-logind[1432]: Removed session 24. Sep 4 17:39:37.615313 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 42158 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:37.616125 sshd[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:37.643985 systemd-logind[1432]: New session 25 of user core. Sep 4 17:39:37.658218 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:39:38.151167 sshd[5294]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:38.169052 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:42158.service: Deactivated successfully. Sep 4 17:39:38.177198 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:39:38.184522 systemd-logind[1432]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:39:38.186294 systemd-logind[1432]: Removed session 25. 
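The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window (lastFinishedPulling - firstStartedPulling), which matches how the kubelet's pod-start SLI is defined — startup duration excluding image pulls. Checking the numbers from the monotonic (m=+) offsets in that line:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (seconds) from the kubelet line above.
        firstStartedPulling := 81.914864312
        lastFinishedPulling := 87.608576539
        podStartE2E := 8.487806676 // observedRunningTime - podCreationTimestamp

        pullWindow := lastFinishedPulling - firstStartedPulling
        podStartSLO := podStartE2E - pullWindow
        fmt.Printf("pull window:  %.9fs\n", pullWindow)  // ~5.693712227s
        fmt.Printf("SLO duration: %.9fs\n", podStartSLO) // ~2.794094449s, as logged
    }

So of the 8.487806676s end-to-end start, 5.693712227s went to pulling ghcr.io/flatcar/calico/apiserver:v3.28.1, leaving the 2.794094449s SLO figure the kubelet logged.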
Sep 4 17:39:39.740001 kernel: hrtimer: interrupt took 6003275 ns Sep 4 17:39:43.228816 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:52806.service - OpenSSH per-connection server daemon (10.0.0.1:52806). Sep 4 17:39:43.364917 sshd[5318]: Accepted publickey for core from 10.0.0.1 port 52806 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:43.368478 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:43.401878 systemd-logind[1432]: New session 26 of user core. Sep 4 17:39:43.442412 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:39:43.791729 sshd[5318]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:43.800624 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:52806.service: Deactivated successfully. Sep 4 17:39:43.809075 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:39:43.815967 systemd-logind[1432]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:39:43.824961 systemd-logind[1432]: Removed session 26. Sep 4 17:39:47.110972 kubelet[2615]: E0904 17:39:47.107680 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:48.107282 kubelet[2615]: E0904 17:39:48.105775 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:48.831454 systemd[1]: Started sshd@26-10.0.0.117:22-10.0.0.1:47980.service - OpenSSH per-connection server daemon (10.0.0.1:47980). Sep 4 17:39:48.897257 sshd[5358]: Accepted publickey for core from 10.0.0.1 port 47980 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:48.899622 sshd[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:48.921779 systemd-logind[1432]: New session 27 of user core. Sep 4 17:39:48.938451 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:39:49.307042 sshd[5358]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:49.320977 systemd[1]: sshd@26-10.0.0.117:22-10.0.0.1:47980.service: Deactivated successfully. Sep 4 17:39:49.328660 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:39:49.337729 systemd-logind[1432]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:39:49.344091 systemd-logind[1432]: Removed session 27. Sep 4 17:39:51.112079 kubelet[2615]: E0904 17:39:51.111565 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:39:54.347234 systemd[1]: Started sshd@27-10.0.0.117:22-10.0.0.1:47990.service - OpenSSH per-connection server daemon (10.0.0.1:47990). Sep 4 17:39:54.456090 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 47990 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:54.457685 sshd[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:54.483810 systemd-logind[1432]: New session 28 of user core. Sep 4 17:39:54.502689 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 17:39:54.726910 sshd[5378]: pam_unix(sshd:session): session closed for user core Sep 4 17:39:54.736016 systemd[1]: sshd@27-10.0.0.117:22-10.0.0.1:47990.service: Deactivated successfully. 
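The recurring dns.go:153 "Nameserver limits exceeded" errors are the kubelet coping with a node resolv.conf that lists more nameservers than the glibc resolver supports: only the first three are applied (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and the rest are dropped with this log line. A minimal sketch of that clamping, assuming a hypothetical fourth nameserver (illustrative — the kubelet's real logic lives in its dns package):

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc resolver limit (MAXNS)

    // clampNameservers keeps the first three nameservers and reports
    // whether anything was dropped, mirroring the "some nameservers have
    // been omitted" warning above.
    func clampNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        // Hypothetical resolv.conf with one nameserver too many.
        ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
        kept, truncated := clampNameservers(ns)
        if truncated {
            fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %s\n",
                strings.Join(kept, " "))
        }
    }

The error repeats at intervals because the reconciliation keeps rereading the same oversized resolv.conf; it is a persistent configuration condition, not a transient fault.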
Sep 4 17:39:54.739798 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 17:39:54.744765 systemd-logind[1432]: Session 28 logged out. Waiting for processes to exit. Sep 4 17:39:54.756523 systemd-logind[1432]: Removed session 28. Sep 4 17:39:59.771228 systemd[1]: Started sshd@28-10.0.0.117:22-10.0.0.1:56450.service - OpenSSH per-connection server daemon (10.0.0.1:56450). Sep 4 17:39:59.850477 sshd[5421]: Accepted publickey for core from 10.0.0.1 port 56450 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:39:59.858704 sshd[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:39:59.873208 systemd-logind[1432]: New session 29 of user core. Sep 4 17:39:59.881740 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 4 17:40:00.117946 sshd[5421]: pam_unix(sshd:session): session closed for user core Sep 4 17:40:00.133438 systemd[1]: sshd@28-10.0.0.117:22-10.0.0.1:56450.service: Deactivated successfully. Sep 4 17:40:00.141463 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 17:40:00.142719 systemd-logind[1432]: Session 29 logged out. Waiting for processes to exit. Sep 4 17:40:00.152975 systemd-logind[1432]: Removed session 29. Sep 4 17:40:05.148144 systemd[1]: Started sshd@29-10.0.0.117:22-10.0.0.1:56466.service - OpenSSH per-connection server daemon (10.0.0.1:56466). Sep 4 17:40:05.240243 sshd[5438]: Accepted publickey for core from 10.0.0.1 port 56466 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:40:05.244160 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:40:05.264708 systemd-logind[1432]: New session 30 of user core. Sep 4 17:40:05.281682 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 4 17:40:05.576677 sshd[5438]: pam_unix(sshd:session): session closed for user core Sep 4 17:40:05.587493 systemd[1]: sshd@29-10.0.0.117:22-10.0.0.1:56466.service: Deactivated successfully. Sep 4 17:40:05.597924 systemd[1]: session-30.scope: Deactivated successfully. Sep 4 17:40:05.613486 systemd-logind[1432]: Session 30 logged out. Waiting for processes to exit. Sep 4 17:40:05.616123 systemd-logind[1432]: Removed session 30. Sep 4 17:40:10.618957 systemd[1]: Started sshd@30-10.0.0.117:22-10.0.0.1:55836.service - OpenSSH per-connection server daemon (10.0.0.1:55836). Sep 4 17:40:10.697381 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 55836 ssh2: RSA SHA256:RhHm0dPfiJgmyyWO13c0dqD54//t8+Uf0Z3xUep79MQ Sep 4 17:40:10.702632 sshd[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:40:10.721226 systemd-logind[1432]: New session 31 of user core. Sep 4 17:40:10.734713 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 4 17:40:11.056275 sshd[5460]: pam_unix(sshd:session): session closed for user core Sep 4 17:40:11.076125 systemd[1]: sshd@30-10.0.0.117:22-10.0.0.1:55836.service: Deactivated successfully. Sep 4 17:40:11.080003 systemd[1]: session-31.scope: Deactivated successfully. Sep 4 17:40:11.100999 systemd-logind[1432]: Session 31 logged out. Waiting for processes to exit. Sep 4 17:40:11.109513 systemd-logind[1432]: Removed session 31.