Oct 9 00:57:32.960300 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024 Oct 9 00:57:32.960329 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 00:57:32.960345 kernel: BIOS-provided physical RAM map: Oct 9 00:57:32.960354 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 9 00:57:32.960362 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 9 00:57:32.960371 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 9 00:57:32.960382 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 9 00:57:32.960391 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 9 00:57:32.960400 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 9 00:57:32.960412 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 9 00:57:32.960439 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 9 00:57:32.960449 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 9 00:57:32.960457 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 9 00:57:32.960492 kernel: NX (Execute Disable) protection: active Oct 9 00:57:32.960515 kernel: APIC: Static calls initialized Oct 9 00:57:32.960551 kernel: SMBIOS 2.8 present. 
Oct 9 00:57:32.960592 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 9 00:57:32.960604 kernel: Hypervisor detected: KVM Oct 9 00:57:32.960630 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 00:57:32.960642 kernel: kvm-clock: using sched offset of 3179697488 cycles Oct 9 00:57:32.960653 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 00:57:32.960663 kernel: tsc: Detected 2794.748 MHz processor Oct 9 00:57:32.960673 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 00:57:32.960684 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 00:57:32.960694 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 9 00:57:32.960709 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 9 00:57:32.960719 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 00:57:32.960729 kernel: Using GB pages for direct mapping Oct 9 00:57:32.960739 kernel: ACPI: Early table checksum verification disabled Oct 9 00:57:32.960749 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 9 00:57:32.960758 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:32.960768 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:32.960778 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:32.960792 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 9 00:57:32.960801 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:32.960811 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:32.960822 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:32.960832 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Oct 9 00:57:32.960842 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Oct 9 00:57:32.960852 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Oct 9 00:57:32.960872 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 9 00:57:32.960885 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Oct 9 00:57:32.960895 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Oct 9 00:57:32.960906 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Oct 9 00:57:32.960917 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Oct 9 00:57:32.960927 kernel: No NUMA configuration found Oct 9 00:57:32.960937 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 9 00:57:32.960948 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Oct 9 00:57:32.960961 kernel: Zone ranges: Oct 9 00:57:32.960972 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 00:57:32.960982 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 9 00:57:32.960993 kernel: Normal empty Oct 9 00:57:32.961003 kernel: Movable zone start for each node Oct 9 00:57:32.961014 kernel: Early memory node ranges Oct 9 00:57:32.961024 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 9 00:57:32.961034 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 9 00:57:32.961045 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 9 00:57:32.961061 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 00:57:32.961072 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 9 00:57:32.961083 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 9 00:57:32.961093 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 9 00:57:32.961104 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 00:57:32.961114 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Oct 9 00:57:32.961125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 9 00:57:32.961135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 00:57:32.961146 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 00:57:32.961160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 00:57:32.961170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 00:57:32.961181 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 9 00:57:32.961191 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 9 00:57:32.961201 kernel: TSC deadline timer available Oct 9 00:57:32.961212 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 9 00:57:32.961222 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 9 00:57:32.961233 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 9 00:57:32.961243 kernel: kvm-guest: setup PV sched yield Oct 9 00:57:32.961261 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 9 00:57:32.961271 kernel: Booting paravirtualized kernel on KVM Oct 9 00:57:32.961281 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 00:57:32.961303 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 9 00:57:32.961314 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Oct 9 00:57:32.961324 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Oct 9 00:57:32.961334 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 9 00:57:32.961344 kernel: kvm-guest: PV spinlocks enabled Oct 9 00:57:32.961355 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 9 00:57:32.961371 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 
root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 00:57:32.961382 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 00:57:32.961392 kernel: random: crng init done Oct 9 00:57:32.961403 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 9 00:57:32.961414 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 00:57:32.961424 kernel: Fallback order for Node 0: 0 Oct 9 00:57:32.961434 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Oct 9 00:57:32.961445 kernel: Policy zone: DMA32 Oct 9 00:57:32.961458 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 00:57:32.961504 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 136900K reserved, 0K cma-reserved) Oct 9 00:57:32.961515 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 9 00:57:32.961525 kernel: ftrace: allocating 37786 entries in 148 pages Oct 9 00:57:32.961535 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 00:57:32.961546 kernel: Dynamic Preempt: voluntary Oct 9 00:57:32.961556 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 00:57:32.961568 kernel: rcu: RCU event tracing is enabled. Oct 9 00:57:32.961579 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 9 00:57:32.961593 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 00:57:32.961604 kernel: Rude variant of Tasks RCU enabled. Oct 9 00:57:32.961614 kernel: Tracing variant of Tasks RCU enabled. Oct 9 00:57:32.961629 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 9 00:57:32.961639 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 9 00:57:32.961650 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 9 00:57:32.961660 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 9 00:57:32.961671 kernel: Console: colour VGA+ 80x25 Oct 9 00:57:32.961681 kernel: printk: console [ttyS0] enabled Oct 9 00:57:32.961692 kernel: ACPI: Core revision 20230628 Oct 9 00:57:32.961706 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 9 00:57:32.961716 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 00:57:32.961726 kernel: x2apic enabled Oct 9 00:57:32.961737 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 00:57:32.961747 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 9 00:57:32.961757 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 9 00:57:32.961768 kernel: kvm-guest: setup PV IPIs Oct 9 00:57:32.961791 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 9 00:57:32.961802 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 9 00:57:32.961813 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 9 00:57:32.961823 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 9 00:57:32.961838 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 9 00:57:32.961849 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 9 00:57:32.961860 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 00:57:32.961871 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 00:57:32.961882 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 00:57:32.961896 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 00:57:32.961907 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 9 00:57:32.961918 kernel: RETBleed: Mitigation: untrained return thunk Oct 9 00:57:32.961933 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 9 00:57:32.961944 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 9 00:57:32.961955 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 9 00:57:32.961967 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 9 00:57:32.961978 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 9 00:57:32.961992 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 9 00:57:32.962003 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 9 00:57:32.962014 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 9 00:57:32.962025 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 9 00:57:32.962041 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Oct 9 00:57:32.962069 kernel: Freeing SMP alternatives memory: 32K Oct 9 00:57:32.962091 kernel: pid_max: default: 32768 minimum: 301 Oct 9 00:57:32.962117 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 9 00:57:32.962139 kernel: landlock: Up and running. Oct 9 00:57:32.962173 kernel: SELinux: Initializing. Oct 9 00:57:32.962195 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 00:57:32.962221 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 00:57:32.962232 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 9 00:57:32.962243 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:57:32.962254 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:57:32.962269 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:57:32.962280 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 9 00:57:32.962300 kernel: ... version: 0 Oct 9 00:57:32.962314 kernel: ... bit width: 48 Oct 9 00:57:32.962325 kernel: ... generic registers: 6 Oct 9 00:57:32.962336 kernel: ... value mask: 0000ffffffffffff Oct 9 00:57:32.962347 kernel: ... max period: 00007fffffffffff Oct 9 00:57:32.962358 kernel: ... fixed-purpose events: 0 Oct 9 00:57:32.962369 kernel: ... event mask: 000000000000003f Oct 9 00:57:32.962379 kernel: signal: max sigframe size: 1776 Oct 9 00:57:32.962390 kernel: rcu: Hierarchical SRCU implementation. Oct 9 00:57:32.962401 kernel: rcu: Max phase no-delay instances is 400. Oct 9 00:57:32.962415 kernel: smp: Bringing up secondary CPUs ... Oct 9 00:57:32.962426 kernel: smpboot: x86: Booting SMP configuration: Oct 9 00:57:32.962437 kernel: .... 
node #0, CPUs: #1 #2 #3 Oct 9 00:57:32.962448 kernel: smp: Brought up 1 node, 4 CPUs Oct 9 00:57:32.962459 kernel: smpboot: Max logical packages: 1 Oct 9 00:57:32.962493 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 9 00:57:32.962504 kernel: devtmpfs: initialized Oct 9 00:57:32.962515 kernel: x86/mm: Memory block size: 128MB Oct 9 00:57:32.962526 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 00:57:32.962537 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 9 00:57:32.962551 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 00:57:32.962575 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 00:57:32.962586 kernel: audit: initializing netlink subsys (disabled) Oct 9 00:57:32.962597 kernel: audit: type=2000 audit(1728435451.792:1): state=initialized audit_enabled=0 res=1 Oct 9 00:57:32.962626 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 00:57:32.962637 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 00:57:32.962648 kernel: cpuidle: using governor menu Oct 9 00:57:32.962659 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 00:57:32.962670 kernel: dca service started, version 1.12.1 Oct 9 00:57:32.962685 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 9 00:57:32.962696 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 9 00:57:32.962710 kernel: PCI: Using configuration type 1 for base access Oct 9 00:57:32.962722 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 9 00:57:32.962734 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 00:57:32.962745 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 00:57:32.962756 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 00:57:32.962770 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 00:57:32.962784 kernel: ACPI: Added _OSI(Module Device) Oct 9 00:57:32.962795 kernel: ACPI: Added _OSI(Processor Device) Oct 9 00:57:32.962806 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 00:57:32.962817 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 00:57:32.962828 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 00:57:32.962839 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 00:57:32.962850 kernel: ACPI: Interpreter enabled Oct 9 00:57:32.962861 kernel: ACPI: PM: (supports S0 S3 S5) Oct 9 00:57:32.962872 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 00:57:32.962883 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 00:57:32.962897 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 00:57:32.962908 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 9 00:57:32.962919 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 00:57:32.963160 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 9 00:57:32.963347 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 9 00:57:32.963544 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 9 00:57:32.963561 kernel: PCI host bridge to bus 0000:00 Oct 9 00:57:32.963739 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 9 00:57:32.963888 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 00:57:32.964041 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 
9 00:57:32.964330 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 9 00:57:32.964516 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 9 00:57:32.964664 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 9 00:57:32.964814 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 00:57:32.965010 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 9 00:57:32.965197 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 9 00:57:32.965376 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 9 00:57:32.965617 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 9 00:57:32.965795 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 9 00:57:32.965967 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 00:57:32.966152 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 9 00:57:32.966330 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Oct 9 00:57:32.966523 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 9 00:57:32.966689 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 9 00:57:32.966872 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 9 00:57:32.967046 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Oct 9 00:57:32.967217 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 9 00:57:32.967407 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Oct 9 00:57:32.967609 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 9 00:57:32.967781 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Oct 9 00:57:32.967950 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 9 00:57:32.968117 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 9 00:57:32.968284 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 9 00:57:32.968487 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 9 00:57:32.968662 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 9 00:57:32.968845 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 9 00:57:32.969006 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Oct 9 00:57:32.969166 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Oct 9 00:57:32.969364 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 9 00:57:32.969616 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 9 00:57:32.969639 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 00:57:32.969650 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 00:57:32.969662 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 00:57:32.969672 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 00:57:32.969683 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 9 00:57:32.969694 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 9 00:57:32.969705 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 9 00:57:32.969716 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 9 00:57:32.969727 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 9 00:57:32.969742 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 9 00:57:32.969752 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 9 00:57:32.969763 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 9 00:57:32.969774 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 9 00:57:32.969784 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 9 00:57:32.969794 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 9 00:57:32.969805 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 9 00:57:32.969816 kernel: iommu: 
Default domain type: Translated Oct 9 00:57:32.969827 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 00:57:32.969842 kernel: PCI: Using ACPI for IRQ routing Oct 9 00:57:32.969854 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 00:57:32.969864 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 9 00:57:32.969876 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 9 00:57:32.970042 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 9 00:57:32.970201 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 9 00:57:32.970372 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 00:57:32.970387 kernel: vgaarb: loaded Oct 9 00:57:32.970398 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 9 00:57:32.970414 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 9 00:57:32.970425 kernel: clocksource: Switched to clocksource kvm-clock Oct 9 00:57:32.970436 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 00:57:32.970447 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 00:57:32.970458 kernel: pnp: PnP ACPI init Oct 9 00:57:32.970650 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 9 00:57:32.970667 kernel: pnp: PnP ACPI: found 6 devices Oct 9 00:57:32.970678 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 9 00:57:32.970695 kernel: NET: Registered PF_INET protocol family Oct 9 00:57:32.970706 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 9 00:57:32.970717 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 9 00:57:32.970728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 00:57:32.970740 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 00:57:32.970750 kernel: TCP bind hash table entries: 32768 (order: 
8, 1048576 bytes, linear) Oct 9 00:57:32.970761 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 9 00:57:32.970772 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 00:57:32.970787 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 00:57:32.970798 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 00:57:32.970809 kernel: NET: Registered PF_XDP protocol family Oct 9 00:57:32.970962 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 00:57:32.971109 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 00:57:32.971258 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 00:57:32.971421 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 9 00:57:32.971594 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 9 00:57:32.971747 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 9 00:57:32.971767 kernel: PCI: CLS 0 bytes, default 64 Oct 9 00:57:32.971778 kernel: Initialise system trusted keyrings Oct 9 00:57:32.971789 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 9 00:57:32.971800 kernel: Key type asymmetric registered Oct 9 00:57:32.971811 kernel: Asymmetric key parser 'x509' registered Oct 9 00:57:32.971821 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 00:57:32.971832 kernel: io scheduler mq-deadline registered Oct 9 00:57:32.971844 kernel: io scheduler kyber registered Oct 9 00:57:32.971854 kernel: io scheduler bfq registered Oct 9 00:57:32.971869 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 00:57:32.971881 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 9 00:57:32.971892 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 9 00:57:32.971903 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 9 00:57:32.971915 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 
00:57:32.971926 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 00:57:32.971937 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 00:57:32.971948 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 00:57:32.971959 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 00:57:32.972129 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 9 00:57:32.972145 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 00:57:32.972312 kernel: rtc_cmos 00:04: registered as rtc0 Oct 9 00:57:32.972491 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T00:57:32 UTC (1728435452) Oct 9 00:57:32.972651 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 9 00:57:32.972665 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 9 00:57:32.972676 kernel: NET: Registered PF_INET6 protocol family Oct 9 00:57:32.972687 kernel: Segment Routing with IPv6 Oct 9 00:57:32.972704 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 00:57:32.972715 kernel: NET: Registered PF_PACKET protocol family Oct 9 00:57:32.972725 kernel: Key type dns_resolver registered Oct 9 00:57:32.972736 kernel: IPI shorthand broadcast: enabled Oct 9 00:57:32.972747 kernel: sched_clock: Marking stable (767003998, 105921623)->(931324897, -58399276) Oct 9 00:57:32.972758 kernel: registered taskstats version 1 Oct 9 00:57:32.972768 kernel: Loading compiled-in X.509 certificates Oct 9 00:57:32.972779 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6' Oct 9 00:57:32.972790 kernel: Key type .fscrypt registered Oct 9 00:57:32.972804 kernel: Key type fscrypt-provisioning registered Oct 9 00:57:32.972815 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 9 00:57:32.972826 kernel: ima: Allocated hash algorithm: sha1 Oct 9 00:57:32.972838 kernel: ima: No architecture policies found Oct 9 00:57:32.972849 kernel: clk: Disabling unused clocks Oct 9 00:57:32.972860 kernel: Freeing unused kernel image (initmem) memory: 42872K Oct 9 00:57:32.972870 kernel: Write protecting the kernel read-only data: 36864k Oct 9 00:57:32.972882 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Oct 9 00:57:32.972892 kernel: Run /init as init process Oct 9 00:57:32.972906 kernel: with arguments: Oct 9 00:57:32.972917 kernel: /init Oct 9 00:57:32.972928 kernel: with environment: Oct 9 00:57:32.972939 kernel: HOME=/ Oct 9 00:57:32.972949 kernel: TERM=linux Oct 9 00:57:32.972960 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 00:57:32.972973 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 00:57:32.972986 systemd[1]: Detected virtualization kvm. Oct 9 00:57:32.973001 systemd[1]: Detected architecture x86-64. Oct 9 00:57:32.973012 systemd[1]: Running in initrd. Oct 9 00:57:32.973024 systemd[1]: No hostname configured, using default hostname. Oct 9 00:57:32.973035 systemd[1]: Hostname set to . Oct 9 00:57:32.973047 systemd[1]: Initializing machine ID from VM UUID. Oct 9 00:57:32.973058 systemd[1]: Queued start job for default target initrd.target. Oct 9 00:57:32.973070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:57:32.973082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:57:32.973098 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 9 00:57:32.973125 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 00:57:32.973140 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 00:57:32.973153 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 00:57:32.973170 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 00:57:32.973186 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 00:57:32.973198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:57:32.973210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:57:32.973222 systemd[1]: Reached target paths.target - Path Units. Oct 9 00:57:32.973233 systemd[1]: Reached target slices.target - Slice Units. Oct 9 00:57:32.973245 systemd[1]: Reached target swap.target - Swaps. Oct 9 00:57:32.973257 systemd[1]: Reached target timers.target - Timer Units. Oct 9 00:57:32.973269 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 00:57:32.973284 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 00:57:32.973307 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 00:57:32.973319 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 00:57:32.973331 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:57:32.973343 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 00:57:32.973355 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:57:32.973367 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 00:57:32.973379 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Oct 9 00:57:32.973395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 00:57:32.973407 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 00:57:32.973419 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 00:57:32.973431 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 00:57:32.973443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 00:57:32.973455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:57:32.973532 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 00:57:32.973545 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:57:32.973557 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 00:57:32.973575 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 00:57:32.973612 systemd-journald[193]: Collecting audit messages is disabled.
Oct 9 00:57:32.973644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 00:57:32.973657 systemd-journald[193]: Journal started
Oct 9 00:57:32.973685 systemd-journald[193]: Runtime Journal (/run/log/journal/09e9a8314f5a458a9d0a9580689d5851) is 6.0M, max 48.4M, 42.3M free.
Oct 9 00:57:32.962671 systemd-modules-load[194]: Inserted module 'overlay'
Oct 9 00:57:33.001034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 00:57:33.001054 kernel: Bridge firewalling registered
Oct 9 00:57:32.996409 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 9 00:57:33.002599 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 00:57:33.003306 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:57:33.014731 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 00:57:33.017779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 00:57:33.021637 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 00:57:33.024426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:57:33.026125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:57:33.030430 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:57:33.034583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:57:33.044830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:57:33.047005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 00:57:33.059268 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:57:33.063647 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 00:57:33.080211 dracut-cmdline[231]: dracut-dracut-053
Oct 9 00:57:33.084172 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 00:57:33.090486 systemd-resolved[220]: Positive Trust Anchors:
Oct 9 00:57:33.090506 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 00:57:33.090538 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 00:57:33.093674 systemd-resolved[220]: Defaulting to hostname 'linux'.
Oct 9 00:57:33.094917 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 00:57:33.100460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 00:57:33.197522 kernel: SCSI subsystem initialized
Oct 9 00:57:33.208513 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 00:57:33.221510 kernel: iscsi: registered transport (tcp)
Oct 9 00:57:33.243680 kernel: iscsi: registered transport (qla4xxx)
Oct 9 00:57:33.243779 kernel: QLogic iSCSI HBA Driver
Oct 9 00:57:33.302961 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 00:57:33.311759 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 00:57:33.336914 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 00:57:33.336984 kernel: device-mapper: uevent: version 1.0.3
Oct 9 00:57:33.337979 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 00:57:33.385527 kernel: raid6: avx2x4 gen() 22378 MB/s
Oct 9 00:57:33.402497 kernel: raid6: avx2x2 gen() 21707 MB/s
Oct 9 00:57:33.419792 kernel: raid6: avx2x1 gen() 20261 MB/s
Oct 9 00:57:33.419808 kernel: raid6: using algorithm avx2x4 gen() 22378 MB/s
Oct 9 00:57:33.437657 kernel: raid6: .... xor() 6317 MB/s, rmw enabled
Oct 9 00:57:33.437731 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 00:57:33.462503 kernel: xor: automatically using best checksumming function avx
Oct 9 00:57:33.693530 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 00:57:33.708846 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 00:57:33.716814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:57:33.731139 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Oct 9 00:57:33.735968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:57:33.747681 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 00:57:33.764586 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Oct 9 00:57:33.805121 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 00:57:33.817629 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 00:57:33.882899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 00:57:33.893715 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 00:57:33.906986 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 00:57:33.910850 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 00:57:33.912272 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 00:57:33.913535 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 00:57:33.922867 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 00:57:33.930502 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 9 00:57:33.937491 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 00:57:33.939602 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 9 00:57:33.941061 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 00:57:33.947137 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 00:57:33.947162 kernel: GPT:9289727 != 19775487
Oct 9 00:57:33.947172 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 00:57:33.947182 kernel: GPT:9289727 != 19775487
Oct 9 00:57:33.947192 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 00:57:33.947201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:57:33.948850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 00:57:33.948970 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:57:33.953336 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:57:33.956275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 00:57:33.956750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:57:33.959151 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:57:33.967486 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 00:57:33.970223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:57:33.971429 kernel: AES CTR mode by8 optimization enabled
Oct 9 00:57:33.971451 kernel: libata version 3.00 loaded.
Oct 9 00:57:34.004541 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (476)
Oct 9 00:57:34.004603 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458)
Oct 9 00:57:34.004623 kernel: ahci 0000:00:1f.2: version 3.0
Oct 9 00:57:34.005693 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 9 00:57:34.009685 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 00:57:34.064755 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 9 00:57:34.065059 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 9 00:57:34.065285 kernel: scsi host0: ahci
Oct 9 00:57:34.065540 kernel: scsi host1: ahci
Oct 9 00:57:34.065767 kernel: scsi host2: ahci
Oct 9 00:57:34.066003 kernel: scsi host3: ahci
Oct 9 00:57:34.066205 kernel: scsi host4: ahci
Oct 9 00:57:34.066421 kernel: scsi host5: ahci
Oct 9 00:57:34.066650 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Oct 9 00:57:34.066665 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Oct 9 00:57:34.066678 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Oct 9 00:57:34.066698 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Oct 9 00:57:34.066714 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Oct 9 00:57:34.066724 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Oct 9 00:57:34.068785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:57:34.074161 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 00:57:34.091506 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 00:57:34.092818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 00:57:34.097420 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 00:57:34.106819 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 00:57:34.109164 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:57:34.140080 disk-uuid[556]: Primary Header is updated.
Oct 9 00:57:34.140080 disk-uuid[556]: Secondary Entries is updated.
Oct 9 00:57:34.140080 disk-uuid[556]: Secondary Header is updated.
Oct 9 00:57:34.150298 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:57:34.150319 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:57:34.170376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:57:34.324840 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 9 00:57:34.324941 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 9 00:57:34.324954 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 9 00:57:34.326501 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 9 00:57:34.326592 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 9 00:57:34.327491 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 9 00:57:34.328495 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 9 00:57:34.329624 kernel: ata3.00: applying bridge limits
Oct 9 00:57:34.329637 kernel: ata3.00: configured for UDMA/100
Oct 9 00:57:34.330545 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 9 00:57:34.379043 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 9 00:57:34.379486 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 9 00:57:34.393496 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 9 00:57:35.147497 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:57:35.148341 disk-uuid[557]: The operation has completed successfully.
Oct 9 00:57:35.178656 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 00:57:35.178784 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 00:57:35.201698 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 00:57:35.208251 sh[592]: Success
Oct 9 00:57:35.221490 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 9 00:57:35.258505 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 00:57:35.274140 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 00:57:35.277124 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 00:57:35.290337 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377
Oct 9 00:57:35.290395 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 00:57:35.290407 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 00:57:35.291354 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 00:57:35.292083 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 00:57:35.297302 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 00:57:35.298962 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 00:57:35.306619 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 00:57:35.308309 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 00:57:35.320141 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:57:35.320173 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 00:57:35.320185 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:57:35.323515 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:57:35.333534 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 00:57:35.334967 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:57:35.364440 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 00:57:35.371634 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 00:57:35.436801 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 00:57:35.441704 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 00:57:35.525341 systemd-networkd[773]: lo: Link UP
Oct 9 00:57:35.525717 systemd-networkd[773]: lo: Gained carrier
Oct 9 00:57:35.527778 systemd-networkd[773]: Enumeration completed
Oct 9 00:57:35.527899 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 00:57:35.528322 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:57:35.528326 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 00:57:35.529643 systemd-networkd[773]: eth0: Link UP
Oct 9 00:57:35.529647 systemd-networkd[773]: eth0: Gained carrier
Oct 9 00:57:35.529654 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:57:35.530723 systemd[1]: Reached target network.target - Network.
Oct 9 00:57:35.596721 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 00:57:35.598904 ignition[707]: Ignition 2.19.0
Oct 9 00:57:35.598918 ignition[707]: Stage: fetch-offline
Oct 9 00:57:35.598983 ignition[707]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:57:35.598995 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:57:35.599157 ignition[707]: parsed url from cmdline: ""
Oct 9 00:57:35.599162 ignition[707]: no config URL provided
Oct 9 00:57:35.599167 ignition[707]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 00:57:35.599177 ignition[707]: no config at "/usr/lib/ignition/user.ign"
Oct 9 00:57:35.599212 ignition[707]: op(1): [started] loading QEMU firmware config module
Oct 9 00:57:35.599218 ignition[707]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 00:57:35.610982 ignition[707]: op(1): [finished] loading QEMU firmware config module
Oct 9 00:57:35.650603 ignition[707]: parsing config with SHA512: 028c91c17ddf6e0ceab4b34fda23820c2e4307085d6e8e3673e46d5d50989aef62429a0bd155c1cb3ea14ce36a6a6573c7b34e9154df33784df6e10532fb4b8a
Oct 9 00:57:35.654219 unknown[707]: fetched base config from "system"
Oct 9 00:57:35.654245 unknown[707]: fetched user config from "qemu"
Oct 9 00:57:35.654618 ignition[707]: fetch-offline: fetch-offline passed
Oct 9 00:57:35.654684 ignition[707]: Ignition finished successfully
Oct 9 00:57:35.658298 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 00:57:35.660951 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 00:57:35.667912 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 00:57:35.773760 ignition[785]: Ignition 2.19.0
Oct 9 00:57:35.773773 ignition[785]: Stage: kargs
Oct 9 00:57:35.773957 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:57:35.773971 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:57:35.774898 ignition[785]: kargs: kargs passed
Oct 9 00:57:35.774948 ignition[785]: Ignition finished successfully
Oct 9 00:57:35.779193 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 00:57:35.785776 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 00:57:35.806569 ignition[793]: Ignition 2.19.0
Oct 9 00:57:35.806581 ignition[793]: Stage: disks
Oct 9 00:57:35.806740 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:57:35.806751 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:57:35.807635 ignition[793]: disks: disks passed
Oct 9 00:57:35.807683 ignition[793]: Ignition finished successfully
Oct 9 00:57:35.814512 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 00:57:35.819275 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 00:57:35.819359 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 00:57:35.821687 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 00:57:35.822023 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 00:57:35.822368 systemd[1]: Reached target basic.target - Basic System.
Oct 9 00:57:35.838628 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 00:57:35.865223 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 00:57:35.872458 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 00:57:35.883808 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 00:57:35.973507 kernel: EXT4-fs (vda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none.
Oct 9 00:57:35.973967 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 00:57:35.974874 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 00:57:35.988630 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 00:57:35.991932 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 00:57:35.992418 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 9 00:57:35.992502 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 00:57:35.992531 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 00:57:36.005389 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 00:57:36.007191 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811)
Oct 9 00:57:36.006890 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 00:57:36.011766 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:57:36.011783 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 00:57:36.011794 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:57:36.014485 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:57:36.016687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 00:57:36.050268 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 00:57:36.055560 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Oct 9 00:57:36.059523 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 00:57:36.063836 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 00:57:36.161014 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 00:57:36.173556 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 00:57:36.175218 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 00:57:36.182493 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:57:36.199156 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 00:57:36.204124 ignition[924]: INFO : Ignition 2.19.0
Oct 9 00:57:36.204124 ignition[924]: INFO : Stage: mount
Oct 9 00:57:36.205970 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:57:36.205970 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:57:36.205970 ignition[924]: INFO : mount: mount passed
Oct 9 00:57:36.205970 ignition[924]: INFO : Ignition finished successfully
Oct 9 00:57:36.207311 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 00:57:36.214669 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 00:57:36.289623 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 00:57:36.306804 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 00:57:36.315097 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939)
Oct 9 00:57:36.315125 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:57:36.315136 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 00:57:36.316588 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:57:36.319489 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:57:36.320794 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 00:57:36.367629 ignition[956]: INFO : Ignition 2.19.0
Oct 9 00:57:36.367629 ignition[956]: INFO : Stage: files
Oct 9 00:57:36.369758 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:57:36.369758 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:57:36.369758 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 00:57:36.373605 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 00:57:36.373605 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 00:57:36.373605 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 00:57:36.373605 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 00:57:36.379573 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 00:57:36.379573 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 00:57:36.379573 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 00:57:36.373832 unknown[956]: wrote ssh authorized keys file for user: core
Oct 9 00:57:36.417381 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 00:57:36.642584 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 00:57:36.642584 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 00:57:36.646494 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Oct 9 00:57:36.909702 systemd-networkd[773]: eth0: Gained IPv6LL
Oct 9 00:57:37.012456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 00:57:37.674076 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 00:57:37.674076 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 00:57:37.678241 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 00:57:37.680225 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 00:57:37.680225 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 00:57:37.680225 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 9 00:57:37.680225 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 00:57:37.680225 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 00:57:37.680225 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 9 00:57:37.680225 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 00:57:37.715611 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 00:57:37.720192 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 00:57:37.744531 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 00:57:37.746161 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 00:57:37.746161 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 00:57:37.748918 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 00:57:37.748918 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 00:57:37.748918 ignition[956]: INFO : files: files passed
Oct 9 00:57:37.748918 ignition[956]: INFO : Ignition finished successfully
Oct 9 00:57:37.756679 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 00:57:37.770589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 00:57:37.773118 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 00:57:37.774001 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 00:57:37.774113 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 00:57:37.790574 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 00:57:37.794933 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:57:37.794933 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:57:37.798745 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:57:37.802081 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 00:57:37.804819 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 00:57:37.811634 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 00:57:37.837652 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 00:57:37.837780 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 00:57:37.839059 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 00:57:37.841314 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 00:57:37.841848 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 00:57:37.847695 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 00:57:37.862445 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 00:57:37.873657 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 00:57:37.885959 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 00:57:37.886135 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 00:57:37.888703 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 00:57:37.890855 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 00:57:37.890980 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 00:57:37.895981 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 00:57:37.896121 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 00:57:37.899264 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 00:57:37.900344 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 00:57:37.900859 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 00:57:37.901158 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 00:57:37.901523 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 00:57:37.902103 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 00:57:37.902527 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 00:57:37.903002 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 00:57:37.903316 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 00:57:37.903444 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 00:57:37.916841 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 00:57:37.917262 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 00:57:37.917840 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 00:57:37.923539 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 00:57:37.923806 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 00:57:37.923922 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 00:57:37.929010 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 00:57:37.929147 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 00:57:37.931314 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 00:57:37.932284 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 00:57:37.937534 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 00:57:37.940476 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 00:57:37.940620 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 00:57:37.941015 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 00:57:37.941118 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 00:57:37.944099 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 00:57:37.944239 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 00:57:37.946866 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 00:57:37.947030 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 00:57:37.947822 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 00:57:37.947928 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 00:57:37.966744 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 00:57:37.967727 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 00:57:37.967858 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:57:37.971041 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 00:57:37.973391 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 00:57:37.973537 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 00:57:37.975949 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 00:57:37.976059 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 00:57:37.983015 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 00:57:37.983148 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 00:57:38.002432 ignition[1010]: INFO : Ignition 2.19.0
Oct 9 00:57:38.002432 ignition[1010]: INFO : Stage: umount
Oct 9 00:57:38.002432 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:57:38.002432 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:57:38.006977 ignition[1010]: INFO : umount: umount passed
Oct 9 00:57:38.006977 ignition[1010]: INFO : Ignition finished successfully
Oct 9 00:57:38.003741 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 00:57:38.005878 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 00:57:38.006042 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 00:57:38.007716 systemd[1]: Stopped target network.target - Network.
Oct 9 00:57:38.007985 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 00:57:38.008120 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 00:57:38.008718 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 00:57:38.008810 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 00:57:38.009082 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 00:57:38.009132 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 00:57:38.009482 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 00:57:38.009582 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 00:57:38.010498 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 00:57:38.020606 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 00:57:38.029537 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 00:57:38.029765 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 00:57:38.033427 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 00:57:38.033518 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:57:38.037545 systemd-networkd[773]: eth0: DHCPv6 lease lost
Oct 9 00:57:38.040901 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 00:57:38.041092 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 00:57:38.042499 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 00:57:38.042580 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 00:57:38.052751 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 00:57:38.053895 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 00:57:38.053970 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 00:57:38.055618 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 00:57:38.055672 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:57:38.058269 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 00:57:38.058323 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:57:38.060787 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:57:38.076414 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 00:57:38.076724 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 00:57:38.086541 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 00:57:38.086800 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:57:38.089415 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 00:57:38.089501 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 00:57:38.091692 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 00:57:38.091747 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 00:57:38.093903 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 00:57:38.093974 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 00:57:38.096329 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 00:57:38.096395 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 00:57:38.098491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 00:57:38.098560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:57:38.112622 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 00:57:38.124651 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 00:57:38.124734 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:57:38.127174 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 00:57:38.127240 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:57:38.130115 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 00:57:38.130272 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 00:57:38.257658 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 00:57:38.257848 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 00:57:38.260298 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 00:57:38.261616 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 00:57:38.261684 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 00:57:38.278726 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 00:57:38.286299 systemd[1]: Switching root.
Oct 9 00:57:38.315568 systemd-journald[193]: Journal stopped
Oct 9 00:57:39.523324 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 9 00:57:39.523417 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 00:57:39.523444 kernel: SELinux: policy capability open_perms=1
Oct 9 00:57:39.523460 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 00:57:39.523790 kernel: SELinux: policy capability always_check_network=0
Oct 9 00:57:39.523811 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 00:57:39.523827 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 00:57:39.523851 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 00:57:39.523866 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 00:57:39.523894 kernel: audit: type=1403 audit(1728435458.685:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 00:57:39.523915 systemd[1]: Successfully loaded SELinux policy in 46.386ms.
Oct 9 00:57:39.523943 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.209ms.
Oct 9 00:57:39.523967 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 00:57:39.523984 systemd[1]: Detected virtualization kvm.
Oct 9 00:57:39.524001 systemd[1]: Detected architecture x86-64.
Oct 9 00:57:39.524017 systemd[1]: Detected first boot.
Oct 9 00:57:39.524033 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 00:57:39.524049 zram_generator::config[1055]: No configuration found.
Oct 9 00:57:39.524072 systemd[1]: Populated /etc with preset unit settings.
Oct 9 00:57:39.524089 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 00:57:39.524106 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 00:57:39.524122 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 00:57:39.524149 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 00:57:39.524166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 00:57:39.524192 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 00:57:39.524208 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 00:57:39.524233 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 00:57:39.524251 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 00:57:39.524268 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 00:57:39.524284 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 00:57:39.524301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 00:57:39.524318 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 00:57:39.524334 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 00:57:39.524351 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 00:57:39.524368 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 00:57:39.524390 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 00:57:39.524407 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 00:57:39.524423 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 00:57:39.524440 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 00:57:39.524457 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 00:57:39.524491 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 00:57:39.524509 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 00:57:39.524525 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 00:57:39.524547 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 00:57:39.524564 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 00:57:39.524580 systemd[1]: Reached target swap.target - Swaps.
Oct 9 00:57:39.524596 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 00:57:39.524615 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 00:57:39.524632 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 00:57:39.524649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 00:57:39.524666 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 00:57:39.524682 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 00:57:39.524703 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 00:57:39.524720 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 00:57:39.524735 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 00:57:39.524752 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:57:39.524767 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 00:57:39.524782 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 00:57:39.524797 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 00:57:39.524813 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 00:57:39.524828 systemd[1]: Reached target machines.target - Containers.
Oct 9 00:57:39.524849 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 00:57:39.524864 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:57:39.524879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 00:57:39.524894 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 00:57:39.524910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:57:39.524924 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 00:57:39.524940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:57:39.524956 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 00:57:39.524975 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 00:57:39.524990 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 00:57:39.525005 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 00:57:39.525021 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 00:57:39.525037 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 00:57:39.525052 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 00:57:39.525068 kernel: loop: module loaded
Oct 9 00:57:39.525083 kernel: fuse: init (API version 7.39)
Oct 9 00:57:39.525097 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 00:57:39.525117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 00:57:39.525147 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 00:57:39.525164 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 00:57:39.525180 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 00:57:39.525196 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 00:57:39.525223 systemd[1]: Stopped verity-setup.service.
Oct 9 00:57:39.525243 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:57:39.525281 systemd-journald[1118]: Collecting audit messages is disabled.
Oct 9 00:57:39.525312 kernel: ACPI: bus type drm_connector registered
Oct 9 00:57:39.525327 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 00:57:39.525341 systemd-journald[1118]: Journal started
Oct 9 00:57:39.525369 systemd-journald[1118]: Runtime Journal (/run/log/journal/09e9a8314f5a458a9d0a9580689d5851) is 6.0M, max 48.4M, 42.3M free.
Oct 9 00:57:39.262267 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 00:57:39.279675 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 00:57:39.280180 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 00:57:39.540683 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 00:57:39.542424 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 00:57:39.543713 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 00:57:39.544854 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 00:57:39.546937 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 00:57:39.549858 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 00:57:39.551294 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:57:39.553007 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 00:57:39.553219 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 00:57:39.554750 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:57:39.554950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:57:39.556431 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 00:57:39.556633 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 00:57:39.558348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:57:39.558593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:57:39.560246 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 00:57:39.560433 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 00:57:39.561892 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 00:57:39.562085 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 00:57:39.563636 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:57:39.565316 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 00:57:39.566847 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 00:57:39.568547 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 00:57:39.586746 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 00:57:39.598566 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 00:57:39.601332 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 00:57:39.602551 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 00:57:39.602591 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 00:57:39.604739 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 00:57:39.608667 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 00:57:39.611778 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 00:57:39.613057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:57:39.615813 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 00:57:39.619011 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 00:57:39.620451 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 00:57:39.622607 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 00:57:39.626609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 00:57:39.628406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 00:57:39.631819 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 00:57:39.637671 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 00:57:39.641015 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 00:57:39.644270 systemd-journald[1118]: Time spent on flushing to /var/log/journal/09e9a8314f5a458a9d0a9580689d5851 is 30.828ms for 951 entries.
Oct 9 00:57:39.644270 systemd-journald[1118]: System Journal (/var/log/journal/09e9a8314f5a458a9d0a9580689d5851) is 8.0M, max 195.6M, 187.6M free.
Oct 9 00:57:39.710525 systemd-journald[1118]: Received client request to flush runtime journal.
Oct 9 00:57:39.710590 kernel: loop0: detected capacity change from 0 to 138192
Oct 9 00:57:39.642456 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 00:57:39.644044 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 00:57:39.672406 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 00:57:39.675218 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 00:57:39.690102 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 00:57:39.694071 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 00:57:39.699334 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 00:57:39.715502 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 00:57:39.726905 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:57:39.759238 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 00:57:39.764894 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 00:57:39.775757 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 00:57:39.818689 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 00:57:39.822593 kernel: loop1: detected capacity change from 0 to 205544
Oct 9 00:57:39.848373 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Oct 9 00:57:39.848394 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Oct 9 00:57:39.854954 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:57:39.960500 kernel: loop2: detected capacity change from 0 to 140992
Oct 9 00:57:40.009503 kernel: loop3: detected capacity change from 0 to 138192
Oct 9 00:57:40.058529 kernel: loop4: detected capacity change from 0 to 205544
Oct 9 00:57:40.068498 kernel: loop5: detected capacity change from 0 to 140992
Oct 9 00:57:40.077551 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 00:57:40.078189 (sd-merge)[1192]: Merged extensions into '/usr'.
Oct 9 00:57:40.082210 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 00:57:40.082228 systemd[1]: Reloading...
Oct 9 00:57:40.201540 zram_generator::config[1219]: No configuration found.
Oct 9 00:57:40.344369 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 00:57:40.397127 systemd[1]: Reloading finished in 314 ms.
Oct 9 00:57:40.433409 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 00:57:40.435314 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 00:57:40.437324 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 00:57:40.456664 systemd[1]: Starting ensure-sysext.service...
Oct 9 00:57:40.458993 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 00:57:40.463884 ldconfig[1164]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 00:57:40.468220 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Oct 9 00:57:40.468241 systemd[1]: Reloading...
Oct 9 00:57:40.523717 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 00:57:40.524115 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 00:57:40.525166 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 00:57:40.525492 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Oct 9 00:57:40.525577 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Oct 9 00:57:40.529287 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 00:57:40.529303 systemd-tmpfiles[1257]: Skipping /boot
Oct 9 00:57:40.548030 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 00:57:40.548209 systemd-tmpfiles[1257]: Skipping /boot
Oct 9 00:57:40.574501 zram_generator::config[1289]: No configuration found.
Oct 9 00:57:40.681601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 00:57:40.738577 systemd[1]: Reloading finished in 269 ms.
Oct 9 00:57:40.759236 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 00:57:40.764938 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:57:40.775253 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 00:57:40.778217 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 00:57:40.781179 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 00:57:40.786068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 00:57:40.788566 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 00:57:40.794339 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:57:40.794595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:57:40.796293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:57:40.799818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:57:40.802586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 00:57:40.803920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:57:40.807141 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 00:57:40.808199 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:57:40.809381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:57:40.809720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:57:40.810402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:57:40.810612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:57:40.819511 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:57:40.819789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:57:40.822240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:57:40.828011 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 00:57:40.831056 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:57:40.832683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:57:40.832919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:57:40.834563 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 00:57:40.835768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 00:57:40.838396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:57:40.838630 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:57:40.850117 systemd[1]: Finished ensure-sysext.service.
Oct 9 00:57:40.851613 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 00:57:40.858103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:57:40.858564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:57:40.860674 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 00:57:40.860931 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 00:57:40.865930 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 00:57:40.868021 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 00:57:40.872203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 00:57:40.872317 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 00:57:40.880020 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 00:57:40.882650 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 00:57:40.888699 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:57:40.893451 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 00:57:40.902307 augenrules[1373]: No rules
Oct 9 00:57:40.904286 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 00:57:40.904573 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 00:57:40.919370 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 00:57:40.925401 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 00:57:40.935196 systemd-udevd[1370]: Using default interface naming scheme 'v255'.
Oct 9 00:57:40.941188 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 00:57:40.957082 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 00:57:40.958839 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 00:57:40.960752 systemd-resolved[1327]: Positive Trust Anchors:
Oct 9 00:57:40.960771 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 00:57:40.960804 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 00:57:40.978302 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:57:40.978836 systemd-resolved[1327]: Defaulting to hostname 'linux'.
Oct 9 00:57:40.988695 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 00:57:41.000037 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 00:57:41.007527 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 00:57:41.008029 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 00:57:41.105170 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1392)
Oct 9 00:57:41.116514 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1393)
Oct 9 00:57:41.130494 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1393)
Oct 9 00:57:41.155240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 00:57:41.166669 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 00:57:41.181708 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 9 00:57:41.186502 kernel: ACPI: button: Power Button [PWRF]
Oct 9 00:57:41.190959 systemd-networkd[1387]: lo: Link UP
Oct 9 00:57:41.190971 systemd-networkd[1387]: lo: Gained carrier
Oct 9 00:57:41.193717 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 00:57:41.201487 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 9 00:57:41.210084 systemd-networkd[1387]: Enumeration completed
Oct 9 00:57:41.210229 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 00:57:41.211677 systemd[1]: Reached target network.target - Network.
Oct 9 00:57:41.214025 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 9 00:57:41.228093 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 9 00:57:41.231638 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Oct 9 00:57:41.215413 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:57:41.215418 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 00:57:41.219200 systemd-networkd[1387]: eth0: Link UP
Oct 9 00:57:41.219205 systemd-networkd[1387]: eth0: Gained carrier
Oct 9 00:57:41.219226 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:57:41.225321 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 00:57:41.232550 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 00:57:41.233575 systemd-timesyncd[1364]: Network configuration changed, trying to establish connection.
Oct 9 00:57:41.234400 systemd-timesyncd[1364]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 9 00:57:41.234452 systemd-timesyncd[1364]: Initial clock synchronization to Wed 2024-10-09 00:57:41.298095 UTC.
Oct 9 00:57:41.236699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:57:41.241518 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 00:57:41.382510 kernel: kvm_amd: TSC scaling supported
Oct 9 00:57:41.382722 kernel: kvm_amd: Nested Virtualization enabled
Oct 9 00:57:41.382756 kernel: kvm_amd: Nested Paging enabled
Oct 9 00:57:41.382781 kernel: kvm_amd: LBR virtualization supported
Oct 9 00:57:41.382810 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 9 00:57:41.382837 kernel: kvm_amd: Virtual GIF supported
Oct 9 00:57:41.397869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:57:41.409500 kernel: EDAC MC: Ver: 3.0.0
Oct 9 00:57:41.443961 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 00:57:41.456901 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 00:57:41.465148 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 00:57:41.495813 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 00:57:41.498399 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 00:57:41.499766 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 00:57:41.517033 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 00:57:41.518394 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 00:57:41.519904 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 00:57:41.521121 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 00:57:41.522434 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 00:57:41.523829 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 00:57:41.523859 systemd[1]: Reached target paths.target - Path Units.
Oct 9 00:57:41.524764 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 00:57:41.526455 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 00:57:41.529461 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 00:57:41.551487 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 00:57:41.553990 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 00:57:41.555660 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 00:57:41.556823 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 00:57:41.557789 systemd[1]: Reached target basic.target - Basic System.
Oct 9 00:57:41.558777 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 00:57:41.558815 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 00:57:41.559889 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 00:57:41.562038 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 00:57:41.565522 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 00:57:41.565874 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 00:57:41.574767 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 00:57:41.574881 jq[1433]: false
Oct 9 00:57:41.574888 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 00:57:41.576302 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 00:57:41.583587 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 00:57:41.586671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 00:57:41.591204 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 00:57:41.596628 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 00:57:41.599046 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 00:57:41.599736 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 00:57:41.601698 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 00:57:41.608602 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 00:57:41.611277 dbus-daemon[1432]: [system] SELinux support is enabled
Oct 9 00:57:41.611172 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found loop3
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found loop4
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found loop5
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found sr0
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found vda
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found vda1
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found vda2
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found vda3
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found usr
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found vda4
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found vda6
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found vda7
Oct 9 00:57:41.614027 extend-filesystems[1434]: Found vda9
Oct 9 00:57:41.614027 extend-filesystems[1434]: Checking size of /dev/vda9
Oct 9 00:57:41.646179 update_engine[1446]: I20241009 00:57:41.620766 1446 main.cc:92] Flatcar Update Engine starting
Oct 9 00:57:41.646179 update_engine[1446]: I20241009 00:57:41.622026 1446 update_check_scheduler.cc:74] Next update check in 8m2s
Oct 9 00:57:41.621120 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 00:57:41.646558 jq[1448]: true
Oct 9 00:57:41.651906 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 00:57:41.652141 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 00:57:41.652560 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 00:57:41.652771 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 00:57:41.655984 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 00:57:41.656309 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 00:57:41.673350 extend-filesystems[1434]: Resized partition /dev/vda9
Oct 9 00:57:41.676634 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024)
Oct 9 00:57:41.683238 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1397)
Oct 9 00:57:41.679599 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 00:57:41.684111 jq[1455]: true
Oct 9 00:57:41.687501 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 9 00:57:41.716223 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 00:57:41.716298 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 00:57:41.717514 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 9 00:57:41.718516 systemd-logind[1445]: New seat seat0.
Oct 9 00:57:41.730925 tar[1454]: linux-amd64/helm
Oct 9 00:57:41.740218 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 00:57:41.741651 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 00:57:41.744107 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 00:57:41.744290 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 00:57:41.745735 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 00:57:41.745859 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 00:57:41.752434 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 00:57:41.752434 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 9 00:57:41.752434 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 9 00:57:41.756288 extend-filesystems[1434]: Resized filesystem in /dev/vda9
Oct 9 00:57:41.754109 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 00:57:41.759101 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 00:57:41.760788 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 00:57:41.772189 bash[1485]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 00:57:41.777582 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 00:57:41.781112 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 9 00:57:41.872227 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 00:57:41.889533 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 00:57:41.943205 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 00:57:41.952306 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 00:57:41.968665 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 00:57:41.968904 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 00:57:41.998901 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 00:57:42.019835 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 00:57:42.041071 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 00:57:42.045254 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 00:57:42.046722 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 00:57:42.195380 containerd[1456]: time="2024-10-09T00:57:42.195042098Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 00:57:42.223864 containerd[1456]: time="2024-10-09T00:57:42.223695497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 00:57:42.226499 containerd[1456]: time="2024-10-09T00:57:42.226395264Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:57:42.226499 containerd[1456]: time="2024-10-09T00:57:42.226456098Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 00:57:42.226499 containerd[1456]: time="2024-10-09T00:57:42.226503918Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 00:57:42.226773 containerd[1456]: time="2024-10-09T00:57:42.226742220Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 00:57:42.226834 containerd[1456]: time="2024-10-09T00:57:42.226770988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 00:57:42.226900 containerd[1456]: time="2024-10-09T00:57:42.226871163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:57:42.226900 containerd[1456]: time="2024-10-09T00:57:42.226895123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 00:57:42.227207 containerd[1456]: time="2024-10-09T00:57:42.227173901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:57:42.227207 containerd[1456]: time="2024-10-09T00:57:42.227200959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 00:57:42.227256 containerd[1456]: time="2024-10-09T00:57:42.227218069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:57:42.227256 containerd[1456]: time="2024-10-09T00:57:42.227233207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 00:57:42.227379 containerd[1456]: time="2024-10-09T00:57:42.227351418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 00:57:42.227743 containerd[1456]: time="2024-10-09T00:57:42.227713412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 00:57:42.227911 containerd[1456]: time="2024-10-09T00:57:42.227882177Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:57:42.227911 containerd[1456]: time="2024-10-09T00:57:42.227905373Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 00:57:42.228065 containerd[1456]: time="2024-10-09T00:57:42.228043851Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 00:57:42.228141 containerd[1456]: time="2024-10-09T00:57:42.228122883Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 00:57:42.233950 containerd[1456]: time="2024-10-09T00:57:42.233903915Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 00:57:42.234016 containerd[1456]: time="2024-10-09T00:57:42.233979587Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 00:57:42.234016 containerd[1456]: time="2024-10-09T00:57:42.234007278Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 00:57:42.234054 containerd[1456]: time="2024-10-09T00:57:42.234023484Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 00:57:42.234054 containerd[1456]: time="2024-10-09T00:57:42.234038028Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 00:57:42.234271 containerd[1456]: time="2024-10-09T00:57:42.234241074Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 00:57:42.234561 containerd[1456]: time="2024-10-09T00:57:42.234494936Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 00:57:42.234646 containerd[1456]: time="2024-10-09T00:57:42.234626354Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 00:57:42.234683 containerd[1456]: time="2024-10-09T00:57:42.234647054Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 00:57:42.234683 containerd[1456]: time="2024-10-09T00:57:42.234660956Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 00:57:42.234683 containerd[1456]: time="2024-10-09T00:57:42.234674536Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 00:57:42.234771 containerd[1456]: time="2024-10-09T00:57:42.234690428Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 00:57:42.234771 containerd[1456]: time="2024-10-09T00:57:42.234714730Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 00:57:42.234771 containerd[1456]: time="2024-10-09T00:57:42.234729034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 00:57:42.234771 containerd[1456]: time="2024-10-09T00:57:42.234742794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 00:57:42.234771 containerd[1456]: time="2024-10-09T00:57:42.234755297Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 00:57:42.234771 containerd[1456]: time="2024-10-09T00:57:42.234767247Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234778220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234809664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234835465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234849849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234863640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234875218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234887500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234898724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234913108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234924646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.234932 containerd[1456]: time="2024-10-09T00:57:42.234937622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.234960596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.234973481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.234985240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235007490Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235029458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235041821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235052060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235114586Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235136865Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235147065Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235158884Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235168309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.235166 containerd[1456]: time="2024-10-09T00:57:42.235182100Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 00:57:42.235513 containerd[1456]: time="2024-10-09T00:57:42.235196785Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 00:57:42.235513 containerd[1456]: time="2024-10-09T00:57:42.235207065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 00:57:42.235610 containerd[1456]: time="2024-10-09T00:57:42.235554263Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 00:57:42.235610 containerd[1456]: time="2024-10-09T00:57:42.235611115Z" level=info msg="Connect containerd service"
Oct 9 00:57:42.235971 containerd[1456]: time="2024-10-09T00:57:42.235637871Z" level=info msg="using legacy CRI server"
Oct 9 00:57:42.235971 containerd[1456]: time="2024-10-09T00:57:42.235645184Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 00:57:42.235971 containerd[1456]: time="2024-10-09T00:57:42.235782758Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 00:57:42.236582 containerd[1456]: time="2024-10-09T00:57:42.236557512Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 00:57:42.236895 containerd[1456]: time="2024-10-09T00:57:42.236844508Z" level=info msg="Start subscribing containerd event"
Oct 9 00:57:42.237077 containerd[1456]: time="2024-10-09T00:57:42.236970102Z" level=info msg="Start recovering state"
Oct 9 00:57:42.237077 containerd[1456]: time="2024-10-09T00:57:42.236973864Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 00:57:42.237169 containerd[1456]: time="2024-10-09T00:57:42.237144450Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 00:57:42.237223 containerd[1456]: time="2024-10-09T00:57:42.237207075Z" level=info msg="Start event monitor"
Oct 9 00:57:42.239156 containerd[1456]: time="2024-10-09T00:57:42.237419556Z" level=info msg="Start snapshots syncer"
Oct 9 00:57:42.239156 containerd[1456]: time="2024-10-09T00:57:42.237441665Z" level=info msg="Start cni network conf syncer for default"
Oct 9 00:57:42.239156 containerd[1456]: time="2024-10-09T00:57:42.237450065Z" level=info msg="Start streaming server"
Oct 9 00:57:42.237706 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 00:57:42.239792 containerd[1456]: time="2024-10-09T00:57:42.239771150Z" level=info msg="containerd successfully booted in 0.046958s"
Oct 9 00:57:42.346484 tar[1454]: linux-amd64/LICENSE
Oct 9 00:57:42.346647 tar[1454]: linux-amd64/README.md
Oct 9 00:57:42.380595 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 00:57:43.054267 systemd-networkd[1387]: eth0: Gained IPv6LL
Oct 9 00:57:43.057983 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 00:57:43.059888 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 00:57:43.074724 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 9 00:57:43.077456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:57:43.079824 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 00:57:43.100747 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 00:57:43.101547 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 00:57:43.103207 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 00:57:43.104844 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 00:57:44.237947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:57:44.239988 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 00:57:44.243310 systemd[1]: Startup finished in 933ms (kernel) + 5.934s (initrd) + 5.603s (userspace) = 12.471s. Oct 9 00:57:44.265954 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:57:44.911369 kubelet[1545]: E1009 00:57:44.911266 1545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:57:44.916083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:57:44.916333 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 00:57:44.916775 systemd[1]: kubelet.service: Consumed 1.684s CPU time. Oct 9 00:57:51.474216 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 00:57:51.475595 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:53372.service - OpenSSH per-connection server daemon (10.0.0.1:53372). 
Oct 9 00:57:51.526274 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 53372 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:57:51.528408 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:57:51.538010 systemd-logind[1445]: New session 1 of user core. Oct 9 00:57:51.539358 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 00:57:51.548768 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 00:57:51.561790 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 00:57:51.564678 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 00:57:51.573918 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 00:57:51.689741 systemd[1562]: Queued start job for default target default.target. Oct 9 00:57:51.702821 systemd[1562]: Created slice app.slice - User Application Slice. Oct 9 00:57:51.702848 systemd[1562]: Reached target paths.target - Paths. Oct 9 00:57:51.702862 systemd[1562]: Reached target timers.target - Timers. Oct 9 00:57:51.704569 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 00:57:51.716313 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 00:57:51.716459 systemd[1562]: Reached target sockets.target - Sockets. Oct 9 00:57:51.716515 systemd[1562]: Reached target basic.target - Basic System. Oct 9 00:57:51.716556 systemd[1562]: Reached target default.target - Main User Target. Oct 9 00:57:51.716592 systemd[1562]: Startup finished in 135ms. Oct 9 00:57:51.717505 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 00:57:51.719406 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 00:57:51.779818 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:53374.service - OpenSSH per-connection server daemon (10.0.0.1:53374). 
Oct 9 00:57:51.820351 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 53374 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:57:51.822140 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:57:51.825992 systemd-logind[1445]: New session 2 of user core. Oct 9 00:57:51.841599 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 00:57:51.897377 sshd[1573]: pam_unix(sshd:session): session closed for user core Oct 9 00:57:51.916560 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:53374.service: Deactivated successfully. Oct 9 00:57:51.918346 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 00:57:51.920141 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Oct 9 00:57:51.927923 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:53376.service - OpenSSH per-connection server daemon (10.0.0.1:53376). Oct 9 00:57:51.929010 systemd-logind[1445]: Removed session 2. Oct 9 00:57:51.962692 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 53376 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:57:51.964412 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:57:51.968754 systemd-logind[1445]: New session 3 of user core. Oct 9 00:57:51.984608 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 00:57:52.034870 sshd[1580]: pam_unix(sshd:session): session closed for user core Oct 9 00:57:52.053786 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:53376.service: Deactivated successfully. Oct 9 00:57:52.057717 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 00:57:52.059464 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Oct 9 00:57:52.070791 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:53392.service - OpenSSH per-connection server daemon (10.0.0.1:53392). Oct 9 00:57:52.071799 systemd-logind[1445]: Removed session 3. 
Oct 9 00:57:52.105491 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 53392 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:57:52.107176 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:57:52.111898 systemd-logind[1445]: New session 4 of user core. Oct 9 00:57:52.129730 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 00:57:52.185836 sshd[1588]: pam_unix(sshd:session): session closed for user core Oct 9 00:57:52.197568 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:53392.service: Deactivated successfully. Oct 9 00:57:52.199432 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 00:57:52.201162 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Oct 9 00:57:52.207902 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:53398.service - OpenSSH per-connection server daemon (10.0.0.1:53398). Oct 9 00:57:52.208874 systemd-logind[1445]: Removed session 4. Oct 9 00:57:52.241847 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 53398 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:57:52.243504 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:57:52.247565 systemd-logind[1445]: New session 5 of user core. Oct 9 00:57:52.262607 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 00:57:52.321733 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 00:57:52.322099 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:57:52.346865 sudo[1599]: pam_unix(sudo:session): session closed for user root Oct 9 00:57:52.348939 sshd[1596]: pam_unix(sshd:session): session closed for user core Oct 9 00:57:52.356854 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:53398.service: Deactivated successfully. Oct 9 00:57:52.358968 systemd[1]: session-5.scope: Deactivated successfully. 
Oct 9 00:57:52.360737 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Oct 9 00:57:52.362210 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:53410.service - OpenSSH per-connection server daemon (10.0.0.1:53410). Oct 9 00:57:52.363096 systemd-logind[1445]: Removed session 5. Oct 9 00:57:52.414485 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 53410 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:57:52.416348 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:57:52.421176 systemd-logind[1445]: New session 6 of user core. Oct 9 00:57:52.428625 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 00:57:52.484218 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 00:57:52.484590 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:57:52.488974 sudo[1608]: pam_unix(sudo:session): session closed for user root Oct 9 00:57:52.496076 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 00:57:52.496435 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:57:52.529834 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 00:57:52.562617 augenrules[1630]: No rules Oct 9 00:57:52.564862 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 00:57:52.565133 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 00:57:52.566446 sudo[1607]: pam_unix(sudo:session): session closed for user root Oct 9 00:57:52.568398 sshd[1604]: pam_unix(sshd:session): session closed for user core Oct 9 00:57:52.580432 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:53410.service: Deactivated successfully. Oct 9 00:57:52.582561 systemd[1]: session-6.scope: Deactivated successfully. 
Oct 9 00:57:52.584274 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Oct 9 00:57:52.589857 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:53422.service - OpenSSH per-connection server daemon (10.0.0.1:53422). Oct 9 00:57:52.590688 systemd-logind[1445]: Removed session 6. Oct 9 00:57:52.626431 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 53422 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:57:52.628379 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:57:52.632659 systemd-logind[1445]: New session 7 of user core. Oct 9 00:57:52.642616 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 00:57:52.698057 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 00:57:52.698588 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:57:53.302750 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 00:57:53.302865 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 00:57:54.191997 dockerd[1662]: time="2024-10-09T00:57:54.191898943Z" level=info msg="Starting up" Oct 9 00:57:54.404496 dockerd[1662]: time="2024-10-09T00:57:54.404380508Z" level=info msg="Loading containers: start." Oct 9 00:57:54.613499 kernel: Initializing XFRM netlink socket Oct 9 00:57:54.713050 systemd-networkd[1387]: docker0: Link UP Oct 9 00:57:54.749588 dockerd[1662]: time="2024-10-09T00:57:54.749520304Z" level=info msg="Loading containers: done." Oct 9 00:57:54.795812 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1705305208-merged.mount: Deactivated successfully. 
Oct 9 00:57:54.797403 dockerd[1662]: time="2024-10-09T00:57:54.797344321Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 00:57:54.797592 dockerd[1662]: time="2024-10-09T00:57:54.797558615Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 00:57:54.797811 dockerd[1662]: time="2024-10-09T00:57:54.797779032Z" level=info msg="Daemon has completed initialization" Oct 9 00:57:54.838829 dockerd[1662]: time="2024-10-09T00:57:54.838731145Z" level=info msg="API listen on /run/docker.sock" Oct 9 00:57:54.838913 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 00:57:55.166573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 00:57:55.175793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:57:55.376106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 00:57:55.381793 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:57:55.445322 containerd[1456]: time="2024-10-09T00:57:55.445201091Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 9 00:57:55.682950 kubelet[1865]: E1009 00:57:55.682863 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:57:55.690433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:57:55.690692 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 00:57:56.240186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913185998.mount: Deactivated successfully. 
Oct 9 00:57:57.343588 containerd[1456]: time="2024-10-09T00:57:57.343522479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:57:57.344141 containerd[1456]: time="2024-10-09T00:57:57.344103184Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066621" Oct 9 00:57:57.345671 containerd[1456]: time="2024-10-09T00:57:57.345598596Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:57:57.348564 containerd[1456]: time="2024-10-09T00:57:57.348536070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:57:57.349604 containerd[1456]: time="2024-10-09T00:57:57.349552189Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 1.904303851s" Oct 9 00:57:57.349673 containerd[1456]: time="2024-10-09T00:57:57.349611439Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\"" Oct 9 00:57:57.351361 containerd[1456]: time="2024-10-09T00:57:57.351337191Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 9 00:57:58.924538 containerd[1456]: time="2024-10-09T00:57:58.924453171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:57:58.925215 containerd[1456]: time="2024-10-09T00:57:58.925156767Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690922" Oct 9 00:57:58.926415 containerd[1456]: time="2024-10-09T00:57:58.926372457Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:57:58.931225 containerd[1456]: time="2024-10-09T00:57:58.931142273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:57:58.932602 containerd[1456]: time="2024-10-09T00:57:58.932571155Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 1.581202971s" Oct 9 00:57:58.932602 containerd[1456]: time="2024-10-09T00:57:58.932603461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\"" Oct 9 00:57:58.933107 containerd[1456]: time="2024-10-09T00:57:58.933084203Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 9 00:58:00.218342 containerd[1456]: time="2024-10-09T00:58:00.218254204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:00.219017 containerd[1456]: time="2024-10-09T00:58:00.218950066Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646758" Oct 9 00:58:00.220332 containerd[1456]: time="2024-10-09T00:58:00.220268860Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:00.223855 containerd[1456]: time="2024-10-09T00:58:00.223814099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:00.224915 containerd[1456]: time="2024-10-09T00:58:00.224879873Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 1.291765342s" Oct 9 00:58:00.224915 containerd[1456]: time="2024-10-09T00:58:00.224914331Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\"" Oct 9 00:58:00.225788 containerd[1456]: time="2024-10-09T00:58:00.225747109Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 9 00:58:01.378947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2230407125.mount: Deactivated successfully. 
Oct 9 00:58:02.045276 containerd[1456]: time="2024-10-09T00:58:02.045182903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:02.045924 containerd[1456]: time="2024-10-09T00:58:02.045850836Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208881" Oct 9 00:58:02.047083 containerd[1456]: time="2024-10-09T00:58:02.047044440Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:02.049114 containerd[1456]: time="2024-10-09T00:58:02.049077908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:02.049648 containerd[1456]: time="2024-10-09T00:58:02.049616878Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 1.8238307s" Oct 9 00:58:02.049693 containerd[1456]: time="2024-10-09T00:58:02.049663025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\"" Oct 9 00:58:02.050498 containerd[1456]: time="2024-10-09T00:58:02.050460453Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 00:58:02.725071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157849944.mount: Deactivated successfully. 
Oct 9 00:58:03.499734 containerd[1456]: time="2024-10-09T00:58:03.499616852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:03.500683 containerd[1456]: time="2024-10-09T00:58:03.500624375Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 00:58:03.501895 containerd[1456]: time="2024-10-09T00:58:03.501847272Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:03.507032 containerd[1456]: time="2024-10-09T00:58:03.506999679Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.456495125s" Oct 9 00:58:03.507032 containerd[1456]: time="2024-10-09T00:58:03.507034689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 00:58:03.507790 containerd[1456]: time="2024-10-09T00:58:03.507745722Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 9 00:58:03.508179 containerd[1456]: time="2024-10-09T00:58:03.508150045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:04.005777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245633760.mount: Deactivated successfully. 
Oct 9 00:58:04.011993 containerd[1456]: time="2024-10-09T00:58:04.011940066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:04.012695 containerd[1456]: time="2024-10-09T00:58:04.012619092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 9 00:58:04.013725 containerd[1456]: time="2024-10-09T00:58:04.013687342Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:04.016063 containerd[1456]: time="2024-10-09T00:58:04.016030990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:04.016786 containerd[1456]: time="2024-10-09T00:58:04.016748405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.972506ms" Oct 9 00:58:04.016841 containerd[1456]: time="2024-10-09T00:58:04.016787797Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 9 00:58:04.017400 containerd[1456]: time="2024-10-09T00:58:04.017373518Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 9 00:58:04.556768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909980227.mount: Deactivated successfully. Oct 9 00:58:05.783199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 9 00:58:05.795829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:06.013077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:06.018172 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:58:06.119243 kubelet[2054]: E1009 00:58:06.116975 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:58:06.122785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:58:06.123051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 00:58:06.741320 containerd[1456]: time="2024-10-09T00:58:06.741219716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:07.173460 containerd[1456]: time="2024-10-09T00:58:07.173360978Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241740" Oct 9 00:58:07.176623 containerd[1456]: time="2024-10-09T00:58:07.175992016Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:07.179320 containerd[1456]: time="2024-10-09T00:58:07.179267654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:07.180804 containerd[1456]: time="2024-10-09T00:58:07.180744523Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" 
with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.163342848s" Oct 9 00:58:07.180804 containerd[1456]: time="2024-10-09T00:58:07.180783247Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Oct 9 00:58:09.034080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:09.042667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:09.066822 systemd[1]: Reloading requested from client PID 2095 ('systemctl') (unit session-7.scope)... Oct 9 00:58:09.066848 systemd[1]: Reloading... Oct 9 00:58:09.158512 zram_generator::config[2137]: No configuration found. Oct 9 00:58:09.412046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:58:09.489018 systemd[1]: Reloading finished in 421 ms. Oct 9 00:58:09.534920 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 00:58:09.535061 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 00:58:09.535439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:09.549780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:09.699050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 00:58:09.704011 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 00:58:09.781932 kubelet[2182]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:58:09.781932 kubelet[2182]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 00:58:09.781932 kubelet[2182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:58:09.781932 kubelet[2182]: I1009 00:58:09.781306 2182 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 00:58:10.087595 kubelet[2182]: I1009 00:58:10.087552 2182 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 00:58:10.087595 kubelet[2182]: I1009 00:58:10.087583 2182 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 00:58:10.087850 kubelet[2182]: I1009 00:58:10.087824 2182 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 00:58:10.108432 kubelet[2182]: I1009 00:58:10.108395 2182 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:58:10.108854 kubelet[2182]: E1009 00:58:10.108817 2182 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:58:10.114941 kubelet[2182]: E1009 00:58:10.114910 2182 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 00:58:10.114941 kubelet[2182]: I1009 00:58:10.114937 2182 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 00:58:10.122242 kubelet[2182]: I1009 00:58:10.122207 2182 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 00:58:10.123147 kubelet[2182]: I1009 00:58:10.123115 2182 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 00:58:10.123330 kubelet[2182]: I1009 00:58:10.123292 2182 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 00:58:10.123501 kubelet[2182]: I1009 00:58:10.123320 2182 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 9 00:58:10.123614 kubelet[2182]: I1009 00:58:10.123510 2182 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 00:58:10.123614 kubelet[2182]: I1009 00:58:10.123521 2182 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 00:58:10.123675 kubelet[2182]: I1009 00:58:10.123658 2182 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:58:10.125042 kubelet[2182]: I1009 00:58:10.125006 2182 kubelet.go:408] "Attempting to 
sync node with API server" Oct 9 00:58:10.125090 kubelet[2182]: I1009 00:58:10.125046 2182 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 00:58:10.125144 kubelet[2182]: I1009 00:58:10.125124 2182 kubelet.go:314] "Adding apiserver pod source" Oct 9 00:58:10.125188 kubelet[2182]: I1009 00:58:10.125181 2182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 00:58:10.130746 kubelet[2182]: W1009 00:58:10.130537 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Oct 9 00:58:10.130746 kubelet[2182]: E1009 00:58:10.130615 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:58:10.130866 kubelet[2182]: W1009 00:58:10.130747 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Oct 9 00:58:10.130866 kubelet[2182]: E1009 00:58:10.130790 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:58:10.131878 kubelet[2182]: I1009 00:58:10.131852 2182 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.22" apiVersion="v1" Oct 9 00:58:10.133238 kubelet[2182]: I1009 00:58:10.133209 2182 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 00:58:10.133701 kubelet[2182]: W1009 00:58:10.133675 2182 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 00:58:10.134377 kubelet[2182]: I1009 00:58:10.134354 2182 server.go:1269] "Started kubelet" Oct 9 00:58:10.134751 kubelet[2182]: I1009 00:58:10.134635 2182 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 00:58:10.135418 kubelet[2182]: I1009 00:58:10.135017 2182 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 00:58:10.135418 kubelet[2182]: I1009 00:58:10.135087 2182 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 00:58:10.135747 kubelet[2182]: I1009 00:58:10.135718 2182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 00:58:10.135924 kubelet[2182]: I1009 00:58:10.135899 2182 server.go:460] "Adding debug handlers to kubelet server" Oct 9 00:58:10.136790 kubelet[2182]: I1009 00:58:10.136654 2182 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 00:58:10.137351 kubelet[2182]: E1009 00:58:10.137331 2182 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 00:58:10.138157 kubelet[2182]: E1009 00:58:10.138123 2182 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:58:10.138222 kubelet[2182]: I1009 00:58:10.138174 2182 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 00:58:10.138402 kubelet[2182]: I1009 00:58:10.138277 2182 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 00:58:10.138402 kubelet[2182]: I1009 00:58:10.138349 2182 reconciler.go:26] "Reconciler: start to sync state" Oct 9 00:58:10.138589 kubelet[2182]: W1009 00:58:10.138545 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Oct 9 00:58:10.138589 kubelet[2182]: E1009 00:58:10.138584 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:58:10.140059 kubelet[2182]: I1009 00:58:10.138717 2182 factory.go:221] Registration of the systemd container factory successfully Oct 9 00:58:10.140059 kubelet[2182]: I1009 00:58:10.138794 2182 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 00:58:10.140059 kubelet[2182]: E1009 00:58:10.138800 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: 
connect: connection refused" interval="200ms" Oct 9 00:58:10.140059 kubelet[2182]: I1009 00:58:10.139681 2182 factory.go:221] Registration of the containerd container factory successfully Oct 9 00:58:10.147839 kubelet[2182]: E1009 00:58:10.140788 2182 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca2ebaea286ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 00:58:10.134329005 +0000 UTC m=+0.426438399,LastTimestamp:2024-10-09 00:58:10.134329005 +0000 UTC m=+0.426438399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 00:58:10.158777 kubelet[2182]: I1009 00:58:10.158726 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 00:58:10.161070 kubelet[2182]: I1009 00:58:10.161041 2182 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 00:58:10.161172 kubelet[2182]: I1009 00:58:10.161084 2182 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 00:58:10.161172 kubelet[2182]: I1009 00:58:10.161105 2182 kubelet.go:2321] "Starting kubelet main sync loop" Oct 9 00:58:10.161172 kubelet[2182]: E1009 00:58:10.161146 2182 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 00:58:10.161633 kubelet[2182]: W1009 00:58:10.161585 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Oct 9 00:58:10.161633 kubelet[2182]: E1009 00:58:10.161621 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:58:10.162325 kubelet[2182]: I1009 00:58:10.162303 2182 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 00:58:10.162325 kubelet[2182]: I1009 00:58:10.162319 2182 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 00:58:10.162424 kubelet[2182]: I1009 00:58:10.162338 2182 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:58:10.196402 kubelet[2182]: I1009 00:58:10.196353 2182 policy_none.go:49] "None policy: Start" Oct 9 00:58:10.196957 kubelet[2182]: I1009 00:58:10.196931 2182 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 00:58:10.197018 kubelet[2182]: I1009 00:58:10.196963 2182 state_mem.go:35] "Initializing new in-memory state store" Oct 9 00:58:10.203328 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Oct 9 00:58:10.221589 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 00:58:10.224521 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 00:58:10.238440 kubelet[2182]: E1009 00:58:10.238412 2182 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:58:10.243402 kubelet[2182]: I1009 00:58:10.243349 2182 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 00:58:10.243618 kubelet[2182]: I1009 00:58:10.243598 2182 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 9 00:58:10.243702 kubelet[2182]: I1009 00:58:10.243617 2182 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 00:58:10.244043 kubelet[2182]: I1009 00:58:10.243805 2182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 00:58:10.245076 kubelet[2182]: E1009 00:58:10.245042 2182 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 00:58:10.269117 systemd[1]: Created slice kubepods-burstable-pode69611052a242eb6cd7b80389e8daae3.slice - libcontainer container kubepods-burstable-pode69611052a242eb6cd7b80389e8daae3.slice. Oct 9 00:58:10.291623 systemd[1]: Created slice kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice - libcontainer container kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice. Oct 9 00:58:10.295919 systemd[1]: Created slice kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice - libcontainer container kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice. 
Oct 9 00:58:10.340081 kubelet[2182]: E1009 00:58:10.339948 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Oct 9 00:58:10.344923 kubelet[2182]: I1009 00:58:10.344890 2182 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 00:58:10.345245 kubelet[2182]: E1009 00:58:10.345204 2182 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Oct 9 00:58:10.439646 kubelet[2182]: I1009 00:58:10.439613 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e69611052a242eb6cd7b80389e8daae3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e69611052a242eb6cd7b80389e8daae3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:10.439646 kubelet[2182]: I1009 00:58:10.439645 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:10.439646 kubelet[2182]: I1009 00:58:10.439663 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:10.439843 kubelet[2182]: I1009 00:58:10.439678 2182 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost" Oct 9 00:58:10.439843 kubelet[2182]: I1009 00:58:10.439693 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e69611052a242eb6cd7b80389e8daae3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e69611052a242eb6cd7b80389e8daae3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:10.439843 kubelet[2182]: I1009 00:58:10.439708 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e69611052a242eb6cd7b80389e8daae3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e69611052a242eb6cd7b80389e8daae3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:10.439843 kubelet[2182]: I1009 00:58:10.439721 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:10.439843 kubelet[2182]: I1009 00:58:10.439735 2182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:10.440040 kubelet[2182]: I1009 00:58:10.439749 2182 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:10.546542 kubelet[2182]: I1009 00:58:10.546506 2182 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 00:58:10.546863 kubelet[2182]: E1009 00:58:10.546832 2182 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Oct 9 00:58:10.590147 kubelet[2182]: E1009 00:58:10.590069 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:10.590752 containerd[1456]: time="2024-10-09T00:58:10.590676913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e69611052a242eb6cd7b80389e8daae3,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:10.593911 kubelet[2182]: E1009 00:58:10.593885 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:10.594454 containerd[1456]: time="2024-10-09T00:58:10.594390792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:10.598599 kubelet[2182]: E1009 00:58:10.598564 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:10.598857 containerd[1456]: time="2024-10-09T00:58:10.598829465Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:10.740409 kubelet[2182]: E1009 00:58:10.740347 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" Oct 9 00:58:10.949628 kubelet[2182]: I1009 00:58:10.949425 2182 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 00:58:10.950195 kubelet[2182]: E1009 00:58:10.949835 2182 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Oct 9 00:58:11.060002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544990695.mount: Deactivated successfully. Oct 9 00:58:11.061052 kubelet[2182]: W1009 00:58:11.060965 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Oct 9 00:58:11.061177 kubelet[2182]: E1009 00:58:11.061057 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:58:11.068082 containerd[1456]: time="2024-10-09T00:58:11.068043053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:11.069001 
containerd[1456]: time="2024-10-09T00:58:11.068956515Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:11.069730 containerd[1456]: time="2024-10-09T00:58:11.069675824Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:58:11.070687 containerd[1456]: time="2024-10-09T00:58:11.070649746Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:11.071452 containerd[1456]: time="2024-10-09T00:58:11.071399962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 00:58:11.072419 containerd[1456]: time="2024-10-09T00:58:11.072384131Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:11.073222 containerd[1456]: time="2024-10-09T00:58:11.073185884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:58:11.075732 containerd[1456]: time="2024-10-09T00:58:11.075699658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:11.077429 containerd[1456]: time="2024-10-09T00:58:11.077401451Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.51051ms" Oct 9 00:58:11.078048 containerd[1456]: time="2024-10-09T00:58:11.078019589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 487.17825ms" Oct 9 00:58:11.078620 containerd[1456]: time="2024-10-09T00:58:11.078591240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.07444ms" Oct 9 00:58:11.265996 containerd[1456]: time="2024-10-09T00:58:11.265757494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:11.265996 containerd[1456]: time="2024-10-09T00:58:11.265810166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:11.265996 containerd[1456]: time="2024-10-09T00:58:11.265821584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:11.265996 containerd[1456]: time="2024-10-09T00:58:11.265891327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:11.268566 containerd[1456]: time="2024-10-09T00:58:11.268145685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:11.268566 containerd[1456]: time="2024-10-09T00:58:11.268195760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:11.268566 containerd[1456]: time="2024-10-09T00:58:11.268209915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:11.268566 containerd[1456]: time="2024-10-09T00:58:11.268289262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:11.270637 containerd[1456]: time="2024-10-09T00:58:11.269707711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:11.270637 containerd[1456]: time="2024-10-09T00:58:11.269781003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:11.270637 containerd[1456]: time="2024-10-09T00:58:11.269794166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:11.270637 containerd[1456]: time="2024-10-09T00:58:11.269882215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:11.299658 systemd[1]: Started cri-containerd-894d95444a0ca0ce9abe5383d186f93ddbae47ddb779e54f7cdcc17694f6ac22.scope - libcontainer container 894d95444a0ca0ce9abe5383d186f93ddbae47ddb779e54f7cdcc17694f6ac22. Oct 9 00:58:11.301795 systemd[1]: Started cri-containerd-9504c3b8fb55c22f514958ed4364ae7a0a0fcae513a9840cfdefd121311bdd41.scope - libcontainer container 9504c3b8fb55c22f514958ed4364ae7a0a0fcae513a9840cfdefd121311bdd41. 
Oct 9 00:58:11.307655 systemd[1]: Started cri-containerd-20a81d52c17c08f356d352216821f1e078d754f9923599b523c7018eb010708a.scope - libcontainer container 20a81d52c17c08f356d352216821f1e078d754f9923599b523c7018eb010708a. Oct 9 00:58:11.343857 kubelet[2182]: W1009 00:58:11.342377 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Oct 9 00:58:11.343857 kubelet[2182]: E1009 00:58:11.342449 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:58:11.356176 containerd[1456]: time="2024-10-09T00:58:11.356122988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e69611052a242eb6cd7b80389e8daae3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9504c3b8fb55c22f514958ed4364ae7a0a0fcae513a9840cfdefd121311bdd41\"" Oct 9 00:58:11.357745 kubelet[2182]: E1009 00:58:11.357712 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:11.361653 containerd[1456]: time="2024-10-09T00:58:11.361616583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"20a81d52c17c08f356d352216821f1e078d754f9923599b523c7018eb010708a\"" Oct 9 00:58:11.361889 containerd[1456]: time="2024-10-09T00:58:11.361861272Z" level=info msg="CreateContainer within sandbox \"9504c3b8fb55c22f514958ed4364ae7a0a0fcae513a9840cfdefd121311bdd41\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 00:58:11.362549 kubelet[2182]: E1009 00:58:11.362518 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:11.365213 containerd[1456]: time="2024-10-09T00:58:11.365183436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"894d95444a0ca0ce9abe5383d186f93ddbae47ddb779e54f7cdcc17694f6ac22\"" Oct 9 00:58:11.366448 containerd[1456]: time="2024-10-09T00:58:11.366412172Z" level=info msg="CreateContainer within sandbox \"20a81d52c17c08f356d352216821f1e078d754f9923599b523c7018eb010708a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 00:58:11.366879 kubelet[2182]: E1009 00:58:11.366741 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:11.368452 containerd[1456]: time="2024-10-09T00:58:11.368403757Z" level=info msg="CreateContainer within sandbox \"894d95444a0ca0ce9abe5383d186f93ddbae47ddb779e54f7cdcc17694f6ac22\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 00:58:11.390824 containerd[1456]: time="2024-10-09T00:58:11.390784087Z" level=info msg="CreateContainer within sandbox \"9504c3b8fb55c22f514958ed4364ae7a0a0fcae513a9840cfdefd121311bdd41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af2016929ded361a981d7e3f6d3542c8a42e32e988d97791899cf6cdf833d99c\"" Oct 9 00:58:11.391515 containerd[1456]: time="2024-10-09T00:58:11.391457282Z" level=info msg="StartContainer for \"af2016929ded361a981d7e3f6d3542c8a42e32e988d97791899cf6cdf833d99c\"" Oct 9 00:58:11.394081 containerd[1456]: time="2024-10-09T00:58:11.394003535Z" level=info 
msg="CreateContainer within sandbox \"20a81d52c17c08f356d352216821f1e078d754f9923599b523c7018eb010708a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b7b0475105c554032a46b38c9576be75a51fcdf9dd3bbf7edac4d2af6a6ba97\"" Oct 9 00:58:11.394373 containerd[1456]: time="2024-10-09T00:58:11.394347982Z" level=info msg="StartContainer for \"6b7b0475105c554032a46b38c9576be75a51fcdf9dd3bbf7edac4d2af6a6ba97\"" Oct 9 00:58:11.402211 containerd[1456]: time="2024-10-09T00:58:11.402093599Z" level=info msg="CreateContainer within sandbox \"894d95444a0ca0ce9abe5383d186f93ddbae47ddb779e54f7cdcc17694f6ac22\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a91f92759dfd161a7549014b2b27711e83e05579400e8c00e3afcb903ccb855\"" Oct 9 00:58:11.403069 containerd[1456]: time="2024-10-09T00:58:11.403050390Z" level=info msg="StartContainer for \"0a91f92759dfd161a7549014b2b27711e83e05579400e8c00e3afcb903ccb855\"" Oct 9 00:58:11.428628 systemd[1]: Started cri-containerd-af2016929ded361a981d7e3f6d3542c8a42e32e988d97791899cf6cdf833d99c.scope - libcontainer container af2016929ded361a981d7e3f6d3542c8a42e32e988d97791899cf6cdf833d99c. Oct 9 00:58:11.431308 systemd[1]: Started cri-containerd-0a91f92759dfd161a7549014b2b27711e83e05579400e8c00e3afcb903ccb855.scope - libcontainer container 0a91f92759dfd161a7549014b2b27711e83e05579400e8c00e3afcb903ccb855. Oct 9 00:58:11.435070 systemd[1]: Started cri-containerd-6b7b0475105c554032a46b38c9576be75a51fcdf9dd3bbf7edac4d2af6a6ba97.scope - libcontainer container 6b7b0475105c554032a46b38c9576be75a51fcdf9dd3bbf7edac4d2af6a6ba97. 
Oct 9 00:58:11.482250 containerd[1456]: time="2024-10-09T00:58:11.482195973Z" level=info msg="StartContainer for \"0a91f92759dfd161a7549014b2b27711e83e05579400e8c00e3afcb903ccb855\" returns successfully" Oct 9 00:58:11.496675 containerd[1456]: time="2024-10-09T00:58:11.496610068Z" level=info msg="StartContainer for \"af2016929ded361a981d7e3f6d3542c8a42e32e988d97791899cf6cdf833d99c\" returns successfully" Oct 9 00:58:11.496675 containerd[1456]: time="2024-10-09T00:58:11.496636283Z" level=info msg="StartContainer for \"6b7b0475105c554032a46b38c9576be75a51fcdf9dd3bbf7edac4d2af6a6ba97\" returns successfully" Oct 9 00:58:11.499102 kubelet[2182]: W1009 00:58:11.498995 2182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Oct 9 00:58:11.499102 kubelet[2182]: E1009 00:58:11.499070 2182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:58:11.755513 kubelet[2182]: I1009 00:58:11.754825 2182 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 00:58:12.170084 kubelet[2182]: E1009 00:58:12.170030 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:12.173507 kubelet[2182]: E1009 00:58:12.172273 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:12.174532 kubelet[2182]: E1009 00:58:12.174502 2182 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:13.074307 kubelet[2182]: E1009 00:58:13.074252 2182 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 00:58:13.176899 kubelet[2182]: E1009 00:58:13.176859 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:13.310830 kubelet[2182]: I1009 00:58:13.310783 2182 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Oct 9 00:58:13.784172 kubelet[2182]: E1009 00:58:13.784139 2182 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:13.784303 kubelet[2182]: E1009 00:58:13.784290 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:14.133967 kubelet[2182]: I1009 00:58:14.133787 2182 apiserver.go:52] "Watching apiserver" Oct 9 00:58:14.139124 kubelet[2182]: I1009 00:58:14.139083 2182 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 9 00:58:14.682559 kubelet[2182]: E1009 00:58:14.682512 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:15.178944 kubelet[2182]: E1009 00:58:15.178909 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 
00:58:15.235171 systemd[1]: Reloading requested from client PID 2461 ('systemctl') (unit session-7.scope)... Oct 9 00:58:15.235187 systemd[1]: Reloading... Oct 9 00:58:15.452540 zram_generator::config[2500]: No configuration found. Oct 9 00:58:15.563711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:58:15.655505 systemd[1]: Reloading finished in 419 ms. Oct 9 00:58:15.698999 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:15.709117 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 00:58:15.709432 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:15.722914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:15.872399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:15.877485 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 00:58:16.028295 kubelet[2545]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:58:16.028295 kubelet[2545]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 00:58:16.028295 kubelet[2545]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 00:58:16.028755 kubelet[2545]: I1009 00:58:16.028336 2545 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 00:58:16.034969 kubelet[2545]: I1009 00:58:16.034926 2545 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 00:58:16.034969 kubelet[2545]: I1009 00:58:16.034962 2545 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 00:58:16.035327 kubelet[2545]: I1009 00:58:16.035305 2545 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 00:58:16.036782 kubelet[2545]: I1009 00:58:16.036754 2545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 00:58:16.038859 kubelet[2545]: I1009 00:58:16.038821 2545 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:58:16.041786 kubelet[2545]: E1009 00:58:16.041760 2545 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 00:58:16.041840 kubelet[2545]: I1009 00:58:16.041787 2545 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 00:58:16.049791 kubelet[2545]: I1009 00:58:16.049757 2545 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 00:58:16.049943 kubelet[2545]: I1009 00:58:16.049903 2545 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 00:58:16.050059 kubelet[2545]: I1009 00:58:16.050034 2545 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 00:58:16.050252 kubelet[2545]: I1009 00:58:16.050058 2545 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Oct 9 00:58:16.050336 kubelet[2545]: I1009 00:58:16.050251 2545 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 00:58:16.050336 kubelet[2545]: I1009 00:58:16.050260 2545 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 00:58:16.050336 kubelet[2545]: I1009 00:58:16.050295 2545 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:58:16.050429 kubelet[2545]: I1009 00:58:16.050418 2545 kubelet.go:408] "Attempting to sync node with API server" Oct 9 00:58:16.050455 kubelet[2545]: I1009 00:58:16.050433 2545 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 00:58:16.050503 kubelet[2545]: I1009 00:58:16.050481 2545 kubelet.go:314] "Adding apiserver pod source" Oct 9 00:58:16.050531 kubelet[2545]: I1009 00:58:16.050503 2545 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 00:58:16.051384 kubelet[2545]: I1009 00:58:16.051358 2545 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 00:58:16.054485 kubelet[2545]: I1009 00:58:16.051699 2545 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 00:58:16.054485 kubelet[2545]: I1009 00:58:16.052797 2545 server.go:1269] "Started kubelet" Oct 9 00:58:16.054485 kubelet[2545]: I1009 00:58:16.053366 2545 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 00:58:16.054485 kubelet[2545]: I1009 00:58:16.053634 2545 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 00:58:16.054485 kubelet[2545]: I1009 00:58:16.054431 2545 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 00:58:16.056435 kubelet[2545]: I1009 00:58:16.056411 2545 server.go:460] "Adding debug handlers to kubelet server" Oct 9 00:58:16.056543 kubelet[2545]: 
I1009 00:58:16.056517 2545 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 00:58:16.060277 kubelet[2545]: I1009 00:58:16.060243 2545 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 00:58:16.060820 kubelet[2545]: I1009 00:58:16.060761 2545 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 00:58:16.060895 kubelet[2545]: I1009 00:58:16.060856 2545 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 00:58:16.061059 kubelet[2545]: I1009 00:58:16.061038 2545 reconciler.go:26] "Reconciler: start to sync state" Oct 9 00:58:16.064597 kubelet[2545]: I1009 00:58:16.064560 2545 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 00:58:16.064783 kubelet[2545]: E1009 00:58:16.064753 2545 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 00:58:16.068617 kubelet[2545]: I1009 00:58:16.068585 2545 factory.go:221] Registration of the containerd container factory successfully Oct 9 00:58:16.068617 kubelet[2545]: I1009 00:58:16.068604 2545 factory.go:221] Registration of the systemd container factory successfully Oct 9 00:58:16.073876 kubelet[2545]: I1009 00:58:16.073815 2545 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 00:58:16.075132 kubelet[2545]: I1009 00:58:16.075098 2545 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 00:58:16.075132 kubelet[2545]: I1009 00:58:16.075134 2545 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 00:58:16.075211 kubelet[2545]: I1009 00:58:16.075154 2545 kubelet.go:2321] "Starting kubelet main sync loop" Oct 9 00:58:16.075273 kubelet[2545]: E1009 00:58:16.075231 2545 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 00:58:16.108488 kubelet[2545]: I1009 00:58:16.108440 2545 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 00:58:16.108488 kubelet[2545]: I1009 00:58:16.108492 2545 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 00:58:16.108648 kubelet[2545]: I1009 00:58:16.108516 2545 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:58:16.108747 kubelet[2545]: I1009 00:58:16.108684 2545 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 00:58:16.108747 kubelet[2545]: I1009 00:58:16.108703 2545 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 00:58:16.108747 kubelet[2545]: I1009 00:58:16.108727 2545 policy_none.go:49] "None policy: Start" Oct 9 00:58:16.109342 kubelet[2545]: I1009 00:58:16.109320 2545 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 00:58:16.109380 kubelet[2545]: I1009 00:58:16.109350 2545 state_mem.go:35] "Initializing new in-memory state store" Oct 9 00:58:16.109544 kubelet[2545]: I1009 00:58:16.109525 2545 state_mem.go:75] "Updated machine memory state" Oct 9 00:58:16.113616 kubelet[2545]: I1009 00:58:16.113556 2545 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 00:58:16.113917 kubelet[2545]: I1009 00:58:16.113795 2545 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 9 00:58:16.113917 kubelet[2545]: I1009 00:58:16.113814 2545 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Oct 9 00:58:16.114394 kubelet[2545]: I1009 00:58:16.114378 2545 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 00:58:16.205050 kubelet[2545]: E1009 00:58:16.204878 2545 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:16.221885 kubelet[2545]: I1009 00:58:16.221841 2545 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 00:58:16.229637 kubelet[2545]: I1009 00:58:16.228975 2545 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Oct 9 00:58:16.229637 kubelet[2545]: I1009 00:58:16.229051 2545 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Oct 9 00:58:16.261270 kubelet[2545]: I1009 00:58:16.261237 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e69611052a242eb6cd7b80389e8daae3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e69611052a242eb6cd7b80389e8daae3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:16.261270 kubelet[2545]: I1009 00:58:16.261268 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:16.261270 kubelet[2545]: I1009 00:58:16.261286 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 
00:58:16.261503 kubelet[2545]: I1009 00:58:16.261301 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e69611052a242eb6cd7b80389e8daae3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e69611052a242eb6cd7b80389e8daae3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:16.261503 kubelet[2545]: I1009 00:58:16.261340 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e69611052a242eb6cd7b80389e8daae3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e69611052a242eb6cd7b80389e8daae3\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:16.261503 kubelet[2545]: I1009 00:58:16.261354 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:16.261503 kubelet[2545]: I1009 00:58:16.261369 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:16.261503 kubelet[2545]: I1009 00:58:16.261383 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:16.261658 kubelet[2545]: I1009 00:58:16.261447 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost" Oct 9 00:58:16.503330 kubelet[2545]: E1009 00:58:16.503166 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:16.504479 kubelet[2545]: E1009 00:58:16.504400 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:16.505283 kubelet[2545]: E1009 00:58:16.505221 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:17.051814 kubelet[2545]: I1009 00:58:17.051757 2545 apiserver.go:52] "Watching apiserver" Oct 9 00:58:17.061245 kubelet[2545]: I1009 00:58:17.061186 2545 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 9 00:58:17.089353 kubelet[2545]: E1009 00:58:17.087797 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:17.089353 kubelet[2545]: E1009 00:58:17.087945 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:17.094725 kubelet[2545]: E1009 00:58:17.094684 2545 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:17.095147 kubelet[2545]: E1009 00:58:17.095067 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:17.106968 kubelet[2545]: I1009 00:58:17.106893 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.10686451 podStartE2EDuration="3.10686451s" podCreationTimestamp="2024-10-09 00:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:17.10625765 +0000 UTC m=+1.118329080" watchObservedRunningTime="2024-10-09 00:58:17.10686451 +0000 UTC m=+1.118935941" Oct 9 00:58:17.117152 kubelet[2545]: I1009 00:58:17.117045 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.117024657 podStartE2EDuration="1.117024657s" podCreationTimestamp="2024-10-09 00:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:17.116979073 +0000 UTC m=+1.129050503" watchObservedRunningTime="2024-10-09 00:58:17.117024657 +0000 UTC m=+1.129096088" Oct 9 00:58:17.124140 kubelet[2545]: I1009 00:58:17.124094 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.1240794 podStartE2EDuration="1.1240794s" podCreationTimestamp="2024-10-09 00:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:17.123285843 +0000 UTC m=+1.135357283" watchObservedRunningTime="2024-10-09 00:58:17.1240794 +0000 UTC m=+1.136150830" 
Oct 9 00:58:18.089290 kubelet[2545]: E1009 00:58:18.089245 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:21.108981 sudo[1642]: pam_unix(sudo:session): session closed for user root Oct 9 00:58:21.111584 sshd[1638]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:21.117813 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:53422.service: Deactivated successfully. Oct 9 00:58:21.120678 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 00:58:21.120930 systemd[1]: session-7.scope: Consumed 4.777s CPU time, 153.0M memory peak, 0B memory swap peak. Oct 9 00:58:21.121392 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Oct 9 00:58:21.122421 systemd-logind[1445]: Removed session 7. Oct 9 00:58:21.954426 kubelet[2545]: E1009 00:58:21.954342 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:22.096247 kubelet[2545]: E1009 00:58:22.096206 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:22.223418 kubelet[2545]: I1009 00:58:22.223239 2545 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 00:58:22.225491 containerd[1456]: time="2024-10-09T00:58:22.223906232Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 00:58:22.225923 kubelet[2545]: I1009 00:58:22.224090 2545 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 00:58:22.933196 systemd[1]: Created slice kubepods-besteffort-poddaa15634_c5ae_418e_8d87_4d0e8175fea4.slice - libcontainer container kubepods-besteffort-poddaa15634_c5ae_418e_8d87_4d0e8175fea4.slice. Oct 9 00:58:23.064134 kubelet[2545]: I1009 00:58:23.064051 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/daa15634-c5ae-418e-8d87-4d0e8175fea4-xtables-lock\") pod \"kube-proxy-drsvx\" (UID: \"daa15634-c5ae-418e-8d87-4d0e8175fea4\") " pod="kube-system/kube-proxy-drsvx" Oct 9 00:58:23.064134 kubelet[2545]: I1009 00:58:23.064106 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/daa15634-c5ae-418e-8d87-4d0e8175fea4-lib-modules\") pod \"kube-proxy-drsvx\" (UID: \"daa15634-c5ae-418e-8d87-4d0e8175fea4\") " pod="kube-system/kube-proxy-drsvx" Oct 9 00:58:23.064134 kubelet[2545]: I1009 00:58:23.064132 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/daa15634-c5ae-418e-8d87-4d0e8175fea4-kube-proxy\") pod \"kube-proxy-drsvx\" (UID: \"daa15634-c5ae-418e-8d87-4d0e8175fea4\") " pod="kube-system/kube-proxy-drsvx" Oct 9 00:58:23.064708 kubelet[2545]: I1009 00:58:23.064207 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l269z\" (UniqueName: \"kubernetes.io/projected/daa15634-c5ae-418e-8d87-4d0e8175fea4-kube-api-access-l269z\") pod \"kube-proxy-drsvx\" (UID: \"daa15634-c5ae-418e-8d87-4d0e8175fea4\") " pod="kube-system/kube-proxy-drsvx" Oct 9 00:58:23.241736 kubelet[2545]: E1009 00:58:23.241594 2545 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:23.242406 containerd[1456]: time="2024-10-09T00:58:23.242226920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-drsvx,Uid:daa15634-c5ae-418e-8d87-4d0e8175fea4,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:23.268820 containerd[1456]: time="2024-10-09T00:58:23.268686881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:23.268820 containerd[1456]: time="2024-10-09T00:58:23.268765621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:23.268820 containerd[1456]: time="2024-10-09T00:58:23.268782076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:23.269063 containerd[1456]: time="2024-10-09T00:58:23.268883706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:23.298649 systemd[1]: Started cri-containerd-7abcb0cc24115ec565ae67adcf273289d46c0a6e9dd8973523bc8bb0e535668b.scope - libcontainer container 7abcb0cc24115ec565ae67adcf273289d46c0a6e9dd8973523bc8bb0e535668b. 
Oct 9 00:58:23.330607 containerd[1456]: time="2024-10-09T00:58:23.330561733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-drsvx,Uid:daa15634-c5ae-418e-8d87-4d0e8175fea4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7abcb0cc24115ec565ae67adcf273289d46c0a6e9dd8973523bc8bb0e535668b\"" Oct 9 00:58:23.332262 kubelet[2545]: E1009 00:58:23.331715 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:23.334083 containerd[1456]: time="2024-10-09T00:58:23.334031630Z" level=info msg="CreateContainer within sandbox \"7abcb0cc24115ec565ae67adcf273289d46c0a6e9dd8973523bc8bb0e535668b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 00:58:23.577173 systemd[1]: Created slice kubepods-besteffort-pod1a9b18b7_6862_427a_8bf7_132e7ccf853a.slice - libcontainer container kubepods-besteffort-pod1a9b18b7_6862_427a_8bf7_132e7ccf853a.slice. Oct 9 00:58:23.633109 containerd[1456]: time="2024-10-09T00:58:23.633041648Z" level=info msg="CreateContainer within sandbox \"7abcb0cc24115ec565ae67adcf273289d46c0a6e9dd8973523bc8bb0e535668b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"47d173317534e85867091e7064377ad3882cdb3f5f378273c7510193bb4f460c\"" Oct 9 00:58:23.633737 containerd[1456]: time="2024-10-09T00:58:23.633700479Z" level=info msg="StartContainer for \"47d173317534e85867091e7064377ad3882cdb3f5f378273c7510193bb4f460c\"" Oct 9 00:58:23.678603 systemd[1]: Started cri-containerd-47d173317534e85867091e7064377ad3882cdb3f5f378273c7510193bb4f460c.scope - libcontainer container 47d173317534e85867091e7064377ad3882cdb3f5f378273c7510193bb4f460c. 
Oct 9 00:58:23.767069 containerd[1456]: time="2024-10-09T00:58:23.767020051Z" level=info msg="StartContainer for \"47d173317534e85867091e7064377ad3882cdb3f5f378273c7510193bb4f460c\" returns successfully" Oct 9 00:58:23.767506 kubelet[2545]: I1009 00:58:23.767451 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1a9b18b7-6862-427a-8bf7-132e7ccf853a-var-lib-calico\") pod \"tigera-operator-55748b469f-2h8g2\" (UID: \"1a9b18b7-6862-427a-8bf7-132e7ccf853a\") " pod="tigera-operator/tigera-operator-55748b469f-2h8g2" Oct 9 00:58:23.767580 kubelet[2545]: I1009 00:58:23.767517 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zvh\" (UniqueName: \"kubernetes.io/projected/1a9b18b7-6862-427a-8bf7-132e7ccf853a-kube-api-access-w4zvh\") pod \"tigera-operator-55748b469f-2h8g2\" (UID: \"1a9b18b7-6862-427a-8bf7-132e7ccf853a\") " pod="tigera-operator/tigera-operator-55748b469f-2h8g2" Oct 9 00:58:23.789146 kubelet[2545]: E1009 00:58:23.789107 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:24.101310 kubelet[2545]: E1009 00:58:24.100617 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:24.101310 kubelet[2545]: E1009 00:58:24.101020 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:24.117553 kubelet[2545]: I1009 00:58:24.117485 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-drsvx" podStartSLOduration=2.117448496 podStartE2EDuration="2.117448496s" 
podCreationTimestamp="2024-10-09 00:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:24.11725865 +0000 UTC m=+8.129330081" watchObservedRunningTime="2024-10-09 00:58:24.117448496 +0000 UTC m=+8.129519926" Oct 9 00:58:24.179809 containerd[1456]: time="2024-10-09T00:58:24.179750210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-2h8g2,Uid:1a9b18b7-6862-427a-8bf7-132e7ccf853a,Namespace:tigera-operator,Attempt:0,}" Oct 9 00:58:24.207934 containerd[1456]: time="2024-10-09T00:58:24.207810499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:24.207934 containerd[1456]: time="2024-10-09T00:58:24.207880589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:24.207934 containerd[1456]: time="2024-10-09T00:58:24.207893828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:24.208149 containerd[1456]: time="2024-10-09T00:58:24.208016369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:24.226775 systemd[1]: run-containerd-runc-k8s.io-867444cd57e972d6017633c81632d54d3a05cb7b0f9f6147de6c2f1c9535daf5-runc.jEAhLB.mount: Deactivated successfully. Oct 9 00:58:24.236614 systemd[1]: Started cri-containerd-867444cd57e972d6017633c81632d54d3a05cb7b0f9f6147de6c2f1c9535daf5.scope - libcontainer container 867444cd57e972d6017633c81632d54d3a05cb7b0f9f6147de6c2f1c9535daf5. 
Oct 9 00:58:24.273230 containerd[1456]: time="2024-10-09T00:58:24.273198027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-2h8g2,Uid:1a9b18b7-6862-427a-8bf7-132e7ccf853a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"867444cd57e972d6017633c81632d54d3a05cb7b0f9f6147de6c2f1c9535daf5\"" Oct 9 00:58:24.275199 containerd[1456]: time="2024-10-09T00:58:24.275118306Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 00:58:24.762196 kubelet[2545]: E1009 00:58:24.762121 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:25.102361 kubelet[2545]: E1009 00:58:25.102320 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:25.377517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248595858.mount: Deactivated successfully. 
Oct 9 00:58:25.853681 containerd[1456]: time="2024-10-09T00:58:25.853613319Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:25.879484 containerd[1456]: time="2024-10-09T00:58:25.879402251Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136521" Oct 9 00:58:25.918603 containerd[1456]: time="2024-10-09T00:58:25.918544217Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:25.940918 containerd[1456]: time="2024-10-09T00:58:25.940866827Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:25.941700 containerd[1456]: time="2024-10-09T00:58:25.941661575Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.662255123s" Oct 9 00:58:25.941700 containerd[1456]: time="2024-10-09T00:58:25.941692962Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 00:58:25.943811 containerd[1456]: time="2024-10-09T00:58:25.943781016Z" level=info msg="CreateContainer within sandbox \"867444cd57e972d6017633c81632d54d3a05cb7b0f9f6147de6c2f1c9535daf5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 00:58:26.021824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967080666.mount: Deactivated successfully. 
Oct 9 00:58:26.092862 containerd[1456]: time="2024-10-09T00:58:26.092812070Z" level=info msg="CreateContainer within sandbox \"867444cd57e972d6017633c81632d54d3a05cb7b0f9f6147de6c2f1c9535daf5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7b0b02ab08e07f382295e805c5f90dc05da0964d587563c4a5398ed07fcd9ce9\"" Oct 9 00:58:26.093209 containerd[1456]: time="2024-10-09T00:58:26.093178692Z" level=info msg="StartContainer for \"7b0b02ab08e07f382295e805c5f90dc05da0964d587563c4a5398ed07fcd9ce9\"" Oct 9 00:58:26.124988 systemd[1]: Started cri-containerd-7b0b02ab08e07f382295e805c5f90dc05da0964d587563c4a5398ed07fcd9ce9.scope - libcontainer container 7b0b02ab08e07f382295e805c5f90dc05da0964d587563c4a5398ed07fcd9ce9. Oct 9 00:58:26.285872 containerd[1456]: time="2024-10-09T00:58:26.285810528Z" level=info msg="StartContainer for \"7b0b02ab08e07f382295e805c5f90dc05da0964d587563c4a5398ed07fcd9ce9\" returns successfully" Oct 9 00:58:26.772983 update_engine[1446]: I20241009 00:58:26.772852 1446 update_attempter.cc:509] Updating boot flags... 
Oct 9 00:58:26.800517 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2940) Oct 9 00:58:26.842509 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2940) Oct 9 00:58:26.880936 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2940) Oct 9 00:58:27.114889 kubelet[2545]: I1009 00:58:27.114819 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-2h8g2" podStartSLOduration=2.4470754230000002 podStartE2EDuration="4.114802511s" podCreationTimestamp="2024-10-09 00:58:23 +0000 UTC" firstStartedPulling="2024-10-09 00:58:24.27476962 +0000 UTC m=+8.286841050" lastFinishedPulling="2024-10-09 00:58:25.942496708 +0000 UTC m=+9.954568138" observedRunningTime="2024-10-09 00:58:27.114660734 +0000 UTC m=+11.126732164" watchObservedRunningTime="2024-10-09 00:58:27.114802511 +0000 UTC m=+11.126873941" Oct 9 00:58:29.077669 systemd[1]: Created slice kubepods-besteffort-podbac1d37a_c37b_4ad2_9e93_db12577dbddc.slice - libcontainer container kubepods-besteffort-podbac1d37a_c37b_4ad2_9e93_db12577dbddc.slice. Oct 9 00:58:29.142391 systemd[1]: Created slice kubepods-besteffort-podd12d715a_5434_461b_88cd_a42435dd9de3.slice - libcontainer container kubepods-besteffort-podd12d715a_5434_461b_88cd_a42435dd9de3.slice. 
Oct 9 00:58:29.198580 kubelet[2545]: I1009 00:58:29.198511 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bac1d37a-c37b-4ad2-9e93-db12577dbddc-typha-certs\") pod \"calico-typha-6dd98f6d74-7nbmq\" (UID: \"bac1d37a-c37b-4ad2-9e93-db12577dbddc\") " pod="calico-system/calico-typha-6dd98f6d74-7nbmq" Oct 9 00:58:29.198580 kubelet[2545]: I1009 00:58:29.198559 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68vpq\" (UniqueName: \"kubernetes.io/projected/bac1d37a-c37b-4ad2-9e93-db12577dbddc-kube-api-access-68vpq\") pod \"calico-typha-6dd98f6d74-7nbmq\" (UID: \"bac1d37a-c37b-4ad2-9e93-db12577dbddc\") " pod="calico-system/calico-typha-6dd98f6d74-7nbmq" Oct 9 00:58:29.198580 kubelet[2545]: I1009 00:58:29.198585 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bac1d37a-c37b-4ad2-9e93-db12577dbddc-tigera-ca-bundle\") pod \"calico-typha-6dd98f6d74-7nbmq\" (UID: \"bac1d37a-c37b-4ad2-9e93-db12577dbddc\") " pod="calico-system/calico-typha-6dd98f6d74-7nbmq" Oct 9 00:58:29.225580 kubelet[2545]: E1009 00:58:29.225519 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qj2tn" podUID="c0542ea8-f247-4752-8f12-00a7cde64127" Oct 9 00:58:29.299419 kubelet[2545]: I1009 00:58:29.299365 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d12d715a-5434-461b-88cd-a42435dd9de3-policysync\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" 
Oct 9 00:58:29.299419 kubelet[2545]: I1009 00:58:29.299401 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d12d715a-5434-461b-88cd-a42435dd9de3-var-lib-calico\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.299419 kubelet[2545]: I1009 00:58:29.299431 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d12d715a-5434-461b-88cd-a42435dd9de3-xtables-lock\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.300513 kubelet[2545]: I1009 00:58:29.299446 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d12d715a-5434-461b-88cd-a42435dd9de3-cni-net-dir\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.300513 kubelet[2545]: I1009 00:58:29.299463 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4mcz\" (UniqueName: \"kubernetes.io/projected/d12d715a-5434-461b-88cd-a42435dd9de3-kube-api-access-h4mcz\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.300513 kubelet[2545]: I1009 00:58:29.299497 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d12d715a-5434-461b-88cd-a42435dd9de3-cni-bin-dir\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.300513 kubelet[2545]: I1009 00:58:29.299513 2545 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d12d715a-5434-461b-88cd-a42435dd9de3-cni-log-dir\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.300513 kubelet[2545]: I1009 00:58:29.299625 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d12d715a-5434-461b-88cd-a42435dd9de3-node-certs\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.300662 kubelet[2545]: I1009 00:58:29.299732 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d12d715a-5434-461b-88cd-a42435dd9de3-lib-modules\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.300662 kubelet[2545]: I1009 00:58:29.299760 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d12d715a-5434-461b-88cd-a42435dd9de3-tigera-ca-bundle\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.300662 kubelet[2545]: I1009 00:58:29.299787 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d12d715a-5434-461b-88cd-a42435dd9de3-flexvol-driver-host\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.300662 kubelet[2545]: I1009 00:58:29.299821 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d12d715a-5434-461b-88cd-a42435dd9de3-var-run-calico\") pod \"calico-node-ldsrr\" (UID: \"d12d715a-5434-461b-88cd-a42435dd9de3\") " pod="calico-system/calico-node-ldsrr" Oct 9 00:58:29.385349 kubelet[2545]: E1009 00:58:29.385175 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:29.386007 containerd[1456]: time="2024-10-09T00:58:29.385956025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dd98f6d74-7nbmq,Uid:bac1d37a-c37b-4ad2-9e93-db12577dbddc,Namespace:calico-system,Attempt:0,}" Oct 9 00:58:29.401710 kubelet[2545]: I1009 00:58:29.401652 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c0542ea8-f247-4752-8f12-00a7cde64127-varrun\") pod \"csi-node-driver-qj2tn\" (UID: \"c0542ea8-f247-4752-8f12-00a7cde64127\") " pod="calico-system/csi-node-driver-qj2tn" Oct 9 00:58:29.401710 kubelet[2545]: I1009 00:58:29.401701 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tkzv\" (UniqueName: \"kubernetes.io/projected/c0542ea8-f247-4752-8f12-00a7cde64127-kube-api-access-4tkzv\") pod \"csi-node-driver-qj2tn\" (UID: \"c0542ea8-f247-4752-8f12-00a7cde64127\") " pod="calico-system/csi-node-driver-qj2tn" Oct 9 00:58:29.401975 kubelet[2545]: I1009 00:58:29.401753 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0542ea8-f247-4752-8f12-00a7cde64127-registration-dir\") pod \"csi-node-driver-qj2tn\" (UID: \"c0542ea8-f247-4752-8f12-00a7cde64127\") " pod="calico-system/csi-node-driver-qj2tn" Oct 9 00:58:29.401975 kubelet[2545]: I1009 00:58:29.401786 2545 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0542ea8-f247-4752-8f12-00a7cde64127-kubelet-dir\") pod \"csi-node-driver-qj2tn\" (UID: \"c0542ea8-f247-4752-8f12-00a7cde64127\") " pod="calico-system/csi-node-driver-qj2tn" Oct 9 00:58:29.401975 kubelet[2545]: I1009 00:58:29.401827 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0542ea8-f247-4752-8f12-00a7cde64127-socket-dir\") pod \"csi-node-driver-qj2tn\" (UID: \"c0542ea8-f247-4752-8f12-00a7cde64127\") " pod="calico-system/csi-node-driver-qj2tn" Oct 9 00:58:29.409499 kubelet[2545]: E1009 00:58:29.403244 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.409499 kubelet[2545]: W1009 00:58:29.403273 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.409499 kubelet[2545]: E1009 00:58:29.403309 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.409499 kubelet[2545]: E1009 00:58:29.403774 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.409499 kubelet[2545]: W1009 00:58:29.403787 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.409499 kubelet[2545]: E1009 00:58:29.403818 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.409499 kubelet[2545]: E1009 00:58:29.404122 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.409499 kubelet[2545]: W1009 00:58:29.404134 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.409499 kubelet[2545]: E1009 00:58:29.404231 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.409499 kubelet[2545]: E1009 00:58:29.404391 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.409848 kubelet[2545]: W1009 00:58:29.404401 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.409848 kubelet[2545]: E1009 00:58:29.404538 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.409848 kubelet[2545]: E1009 00:58:29.404693 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.409848 kubelet[2545]: W1009 00:58:29.404702 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.409848 kubelet[2545]: E1009 00:58:29.404748 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.409848 kubelet[2545]: E1009 00:58:29.404906 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.409848 kubelet[2545]: W1009 00:58:29.404915 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.409848 kubelet[2545]: E1009 00:58:29.404968 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.409848 kubelet[2545]: E1009 00:58:29.405339 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.409848 kubelet[2545]: W1009 00:58:29.405350 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.410074 kubelet[2545]: E1009 00:58:29.405379 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.410074 kubelet[2545]: E1009 00:58:29.407006 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.410074 kubelet[2545]: W1009 00:58:29.407019 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.410074 kubelet[2545]: E1009 00:58:29.407049 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.410074 kubelet[2545]: E1009 00:58:29.407354 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.410074 kubelet[2545]: W1009 00:58:29.407364 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.410074 kubelet[2545]: E1009 00:58:29.407447 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.410074 kubelet[2545]: E1009 00:58:29.407933 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.410074 kubelet[2545]: W1009 00:58:29.407942 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.410074 kubelet[2545]: E1009 00:58:29.407953 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.410290 kubelet[2545]: E1009 00:58:29.408161 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.410290 kubelet[2545]: W1009 00:58:29.408171 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.410290 kubelet[2545]: E1009 00:58:29.408182 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.410290 kubelet[2545]: E1009 00:58:29.408479 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.410290 kubelet[2545]: W1009 00:58:29.408490 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.410290 kubelet[2545]: E1009 00:58:29.408501 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.413278 kubelet[2545]: E1009 00:58:29.413245 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.413278 kubelet[2545]: W1009 00:58:29.413272 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.413378 kubelet[2545]: E1009 00:58:29.413292 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.418046 kubelet[2545]: E1009 00:58:29.418008 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.418046 kubelet[2545]: W1009 00:58:29.418034 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.418148 kubelet[2545]: E1009 00:58:29.418052 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.432139 containerd[1456]: time="2024-10-09T00:58:29.431972672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:29.432290 containerd[1456]: time="2024-10-09T00:58:29.432169979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:29.432290 containerd[1456]: time="2024-10-09T00:58:29.432202717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:29.432452 containerd[1456]: time="2024-10-09T00:58:29.432417170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:29.448040 kubelet[2545]: E1009 00:58:29.447552 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:29.448159 containerd[1456]: time="2024-10-09T00:58:29.448061267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ldsrr,Uid:d12d715a-5434-461b-88cd-a42435dd9de3,Namespace:calico-system,Attempt:0,}" Oct 9 00:58:29.454748 systemd[1]: Started cri-containerd-c3b0da6238a4d4ef90679025a82386c9aeb40980a7103ef812dd9e84add18e33.scope - libcontainer container c3b0da6238a4d4ef90679025a82386c9aeb40980a7103ef812dd9e84add18e33. Oct 9 00:58:29.482306 containerd[1456]: time="2024-10-09T00:58:29.481379796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:29.482306 containerd[1456]: time="2024-10-09T00:58:29.482266920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:29.482306 containerd[1456]: time="2024-10-09T00:58:29.482279436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:29.482535 containerd[1456]: time="2024-10-09T00:58:29.482361646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:29.502658 kubelet[2545]: E1009 00:58:29.502499 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.502658 kubelet[2545]: W1009 00:58:29.502523 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.502658 kubelet[2545]: E1009 00:58:29.502543 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.503510 kubelet[2545]: E1009 00:58:29.502846 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.503510 kubelet[2545]: W1009 00:58:29.502854 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.503510 kubelet[2545]: E1009 00:58:29.503014 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.503510 kubelet[2545]: E1009 00:58:29.503193 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.503510 kubelet[2545]: W1009 00:58:29.503201 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.503510 kubelet[2545]: E1009 00:58:29.503213 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.504343 kubelet[2545]: E1009 00:58:29.503920 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.504343 kubelet[2545]: W1009 00:58:29.503930 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.504343 kubelet[2545]: E1009 00:58:29.503947 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.504417 kubelet[2545]: E1009 00:58:29.504388 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.504417 kubelet[2545]: W1009 00:58:29.504398 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.505073 kubelet[2545]: E1009 00:58:29.504455 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.505073 kubelet[2545]: E1009 00:58:29.504657 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.505073 kubelet[2545]: W1009 00:58:29.504673 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.505205 kubelet[2545]: E1009 00:58:29.505173 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.505288 kubelet[2545]: E1009 00:58:29.505233 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.505288 kubelet[2545]: W1009 00:58:29.505246 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.505382 kubelet[2545]: E1009 00:58:29.505369 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:29.506589 kubelet[2545]: E1009 00:58:29.506561 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:29.506589 kubelet[2545]: W1009 00:58:29.506584 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:29.506938 kubelet[2545]: E1009 00:58:29.506613 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.509721 systemd[1]: Started cri-containerd-dedb5a89890f88e30e0be2f929829b90791b4b0ed20df0a7c3b9d3551a878d08.scope - libcontainer container dedb5a89890f88e30e0be2f929829b90791b4b0ed20df0a7c3b9d3551a878d08. Oct 9 00:58:29.510368 kubelet[2545]: E1009 00:58:29.510348 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:29.520106 containerd[1456]: time="2024-10-09T00:58:29.520056141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dd98f6d74-7nbmq,Uid:bac1d37a-c37b-4ad2-9e93-db12577dbddc,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3b0da6238a4d4ef90679025a82386c9aeb40980a7103ef812dd9e84add18e33\"" Oct 9 00:58:29.522492 kubelet[2545]: E1009 00:58:29.522158 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:29.525734 containerd[1456]: time="2024-10-09T00:58:29.525418545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 00:58:29.548970 containerd[1456]: time="2024-10-09T00:58:29.548902205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ldsrr,Uid:d12d715a-5434-461b-88cd-a42435dd9de3,Namespace:calico-system,Attempt:0,} returns sandbox id \"dedb5a89890f88e30e0be2f929829b90791b4b0ed20df0a7c3b9d3551a878d08\"" Oct 9 00:58:29.550646 kubelet[2545]: E1009 00:58:29.550168 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:31.076315 kubelet[2545]: E1009 00:58:31.076254 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qj2tn" podUID="c0542ea8-f247-4752-8f12-00a7cde64127" Oct 9 00:58:32.483752 containerd[1456]: time="2024-10-09T00:58:32.483693609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:32.484621 containerd[1456]: time="2024-10-09T00:58:32.484544008Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 00:58:32.485749 containerd[1456]: time="2024-10-09T00:58:32.485703616Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:32.487831 containerd[1456]: time="2024-10-09T00:58:32.487795910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:32.488490 containerd[1456]: time="2024-10-09T00:58:32.488444248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.962983877s" Oct 9 00:58:32.488490 containerd[1456]: time="2024-10-09T00:58:32.488481424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 00:58:32.495547 containerd[1456]: time="2024-10-09T00:58:32.495353156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 00:58:32.518605 containerd[1456]: time="2024-10-09T00:58:32.518558448Z" level=info msg="CreateContainer within sandbox \"c3b0da6238a4d4ef90679025a82386c9aeb40980a7103ef812dd9e84add18e33\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 00:58:32.533664 containerd[1456]: time="2024-10-09T00:58:32.533624733Z" level=info msg="CreateContainer within sandbox \"c3b0da6238a4d4ef90679025a82386c9aeb40980a7103ef812dd9e84add18e33\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns 
container id \"053333d3a6db506b3cdcf54a0c2d2b66e0b15601631458cd0e950fda7e43dbb3\"" Oct 9 00:58:32.536256 containerd[1456]: time="2024-10-09T00:58:32.536227044Z" level=info msg="StartContainer for \"053333d3a6db506b3cdcf54a0c2d2b66e0b15601631458cd0e950fda7e43dbb3\"" Oct 9 00:58:32.571618 systemd[1]: Started cri-containerd-053333d3a6db506b3cdcf54a0c2d2b66e0b15601631458cd0e950fda7e43dbb3.scope - libcontainer container 053333d3a6db506b3cdcf54a0c2d2b66e0b15601631458cd0e950fda7e43dbb3. Oct 9 00:58:32.617135 containerd[1456]: time="2024-10-09T00:58:32.617083989Z" level=info msg="StartContainer for \"053333d3a6db506b3cdcf54a0c2d2b66e0b15601631458cd0e950fda7e43dbb3\" returns successfully" Oct 9 00:58:33.084285 kubelet[2545]: E1009 00:58:33.084218 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qj2tn" podUID="c0542ea8-f247-4752-8f12-00a7cde64127" Oct 9 00:58:33.138410 kubelet[2545]: E1009 00:58:33.138362 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:33.154655 kubelet[2545]: I1009 00:58:33.154586 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dd98f6d74-7nbmq" podStartSLOduration=1.182671462 podStartE2EDuration="4.154570207s" podCreationTimestamp="2024-10-09 00:58:29 +0000 UTC" firstStartedPulling="2024-10-09 00:58:29.523085101 +0000 UTC m=+13.535156531" lastFinishedPulling="2024-10-09 00:58:32.494983846 +0000 UTC m=+16.507055276" observedRunningTime="2024-10-09 00:58:33.153373997 +0000 UTC m=+17.165445427" watchObservedRunningTime="2024-10-09 00:58:33.154570207 +0000 UTC m=+17.166641637" Oct 9 00:58:33.226614 kubelet[2545]: E1009 00:58:33.226560 2545 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:33.226614 kubelet[2545]: W1009 00:58:33.226605 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:33.226614 kubelet[2545]: E1009 00:58:33.226638 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:33.238106 kubelet[2545]: E1009 00:58:33.238087 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:33.238106 kubelet[2545]: W1009 00:58:33.238100 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:33.238197 kubelet[2545]: E1009 00:58:33.238119 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:33.238395 kubelet[2545]: E1009 00:58:33.238377 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:33.238395 kubelet[2545]: W1009 00:58:33.238390 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:33.238501 kubelet[2545]: E1009 00:58:33.238403 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:33.238635 kubelet[2545]: E1009 00:58:33.238615 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:33.238635 kubelet[2545]: W1009 00:58:33.238630 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:33.238712 kubelet[2545]: E1009 00:58:33.238649 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:33.238922 kubelet[2545]: E1009 00:58:33.238905 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:33.238922 kubelet[2545]: W1009 00:58:33.238917 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:33.238989 kubelet[2545]: E1009 00:58:33.238933 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:33.239269 kubelet[2545]: E1009 00:58:33.239254 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:33.239269 kubelet[2545]: W1009 00:58:33.239265 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:33.239356 kubelet[2545]: E1009 00:58:33.239277 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:58:33.239528 kubelet[2545]: E1009 00:58:33.239510 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:58:33.239528 kubelet[2545]: W1009 00:58:33.239523 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:58:33.239584 kubelet[2545]: E1009 00:58:33.239534 2545 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:58:34.072554 containerd[1456]: time="2024-10-09T00:58:34.072460848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:34.073290 containerd[1456]: time="2024-10-09T00:58:34.073260520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 00:58:34.074403 containerd[1456]: time="2024-10-09T00:58:34.074374774Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:34.076813 containerd[1456]: time="2024-10-09T00:58:34.076771521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:34.077239 containerd[1456]: time="2024-10-09T00:58:34.077210926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.5818236s" Oct 9 00:58:34.077274 containerd[1456]: time="2024-10-09T00:58:34.077247940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 00:58:34.097005 containerd[1456]: time="2024-10-09T00:58:34.096952358Z" level=info msg="CreateContainer within sandbox \"dedb5a89890f88e30e0be2f929829b90791b4b0ed20df0a7c3b9d3551a878d08\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 00:58:34.113722 containerd[1456]: time="2024-10-09T00:58:34.113604398Z" level=info msg="CreateContainer within sandbox \"dedb5a89890f88e30e0be2f929829b90791b4b0ed20df0a7c3b9d3551a878d08\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"412232f84187655b0b5a1fafc517740c86d57d88f7f957fd17615516f626bd83\"" Oct 9 00:58:34.114484 containerd[1456]: time="2024-10-09T00:58:34.114419610Z" level=info msg="StartContainer for \"412232f84187655b0b5a1fafc517740c86d57d88f7f957fd17615516f626bd83\"" Oct 9 00:58:34.137196 kubelet[2545]: I1009 00:58:34.136981 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 00:58:34.137774 kubelet[2545]: E1009 00:58:34.137373 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:34.153642 systemd[1]: Started cri-containerd-412232f84187655b0b5a1fafc517740c86d57d88f7f957fd17615516f626bd83.scope - libcontainer container 412232f84187655b0b5a1fafc517740c86d57d88f7f957fd17615516f626bd83. Oct 9 00:58:34.200614 containerd[1456]: time="2024-10-09T00:58:34.200533897Z" level=info msg="StartContainer for \"412232f84187655b0b5a1fafc517740c86d57d88f7f957fd17615516f626bd83\" returns successfully" Oct 9 00:58:34.202939 systemd[1]: cri-containerd-412232f84187655b0b5a1fafc517740c86d57d88f7f957fd17615516f626bd83.scope: Deactivated successfully. Oct 9 00:58:34.500982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-412232f84187655b0b5a1fafc517740c86d57d88f7f957fd17615516f626bd83-rootfs.mount: Deactivated successfully. 
Oct 9 00:58:34.531623 containerd[1456]: time="2024-10-09T00:58:34.531543854Z" level=info msg="shim disconnected" id=412232f84187655b0b5a1fafc517740c86d57d88f7f957fd17615516f626bd83 namespace=k8s.io Oct 9 00:58:34.531623 containerd[1456]: time="2024-10-09T00:58:34.531607983Z" level=warning msg="cleaning up after shim disconnected" id=412232f84187655b0b5a1fafc517740c86d57d88f7f957fd17615516f626bd83 namespace=k8s.io Oct 9 00:58:34.531623 containerd[1456]: time="2024-10-09T00:58:34.531616570Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:58:35.075984 kubelet[2545]: E1009 00:58:35.075934 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qj2tn" podUID="c0542ea8-f247-4752-8f12-00a7cde64127" Oct 9 00:58:35.139282 kubelet[2545]: E1009 00:58:35.139247 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:35.140452 containerd[1456]: time="2024-10-09T00:58:35.140175471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 00:58:35.872698 kubelet[2545]: I1009 00:58:35.872653 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 00:58:35.873127 kubelet[2545]: E1009 00:58:35.873080 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:36.140341 kubelet[2545]: E1009 00:58:36.140231 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:37.076665 kubelet[2545]: E1009 00:58:37.076573 2545 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qj2tn" podUID="c0542ea8-f247-4752-8f12-00a7cde64127" Oct 9 00:58:39.075490 kubelet[2545]: E1009 00:58:39.075423 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qj2tn" podUID="c0542ea8-f247-4752-8f12-00a7cde64127" Oct 9 00:58:39.624298 containerd[1456]: time="2024-10-09T00:58:39.624239878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:39.624923 containerd[1456]: time="2024-10-09T00:58:39.624871897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 00:58:39.626075 containerd[1456]: time="2024-10-09T00:58:39.626041327Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:39.628226 containerd[1456]: time="2024-10-09T00:58:39.628195434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:39.628820 containerd[1456]: time="2024-10-09T00:58:39.628795360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.488557264s" Oct 9 00:58:39.628892 containerd[1456]: time="2024-10-09T00:58:39.628825299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 00:58:39.630939 containerd[1456]: time="2024-10-09T00:58:39.630669201Z" level=info msg="CreateContainer within sandbox \"dedb5a89890f88e30e0be2f929829b90791b4b0ed20df0a7c3b9d3551a878d08\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 00:58:39.645778 containerd[1456]: time="2024-10-09T00:58:39.645740000Z" level=info msg="CreateContainer within sandbox \"dedb5a89890f88e30e0be2f929829b90791b4b0ed20df0a7c3b9d3551a878d08\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828\"" Oct 9 00:58:39.646258 containerd[1456]: time="2024-10-09T00:58:39.646223415Z" level=info msg="StartContainer for \"c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828\"" Oct 9 00:58:39.673171 systemd[1]: run-containerd-runc-k8s.io-c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828-runc.z1h8Vp.mount: Deactivated successfully. Oct 9 00:58:39.687604 systemd[1]: Started cri-containerd-c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828.scope - libcontainer container c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828. 
Oct 9 00:58:39.720747 containerd[1456]: time="2024-10-09T00:58:39.720702016Z" level=info msg="StartContainer for \"c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828\" returns successfully" Oct 9 00:58:40.153495 kubelet[2545]: E1009 00:58:40.151177 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:40.866394 containerd[1456]: time="2024-10-09T00:58:40.866334917Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:58:40.869558 systemd[1]: cri-containerd-c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828.scope: Deactivated successfully. Oct 9 00:58:40.891962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828-rootfs.mount: Deactivated successfully. Oct 9 00:58:40.893841 kubelet[2545]: I1009 00:58:40.893813 2545 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 9 00:58:40.927114 systemd[1]: Created slice kubepods-burstable-pod7d24239d_9728_43ad_8e3b_4764e51d2b6a.slice - libcontainer container kubepods-burstable-pod7d24239d_9728_43ad_8e3b_4764e51d2b6a.slice. Oct 9 00:58:40.935322 systemd[1]: Created slice kubepods-burstable-podef261157_71d2_45a1_a374_251f204d107e.slice - libcontainer container kubepods-burstable-podef261157_71d2_45a1_a374_251f204d107e.slice. Oct 9 00:58:40.940240 systemd[1]: Created slice kubepods-besteffort-pod1d965255_e379_49fc_868c_682e1ba70749.slice - libcontainer container kubepods-besteffort-pod1d965255_e379_49fc_868c_682e1ba70749.slice. 
Oct 9 00:58:40.989042 kubelet[2545]: I1009 00:58:40.988968 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s88bl\" (UniqueName: \"kubernetes.io/projected/ef261157-71d2-45a1-a374-251f204d107e-kube-api-access-s88bl\") pod \"coredns-6f6b679f8f-cf4wc\" (UID: \"ef261157-71d2-45a1-a374-251f204d107e\") " pod="kube-system/coredns-6f6b679f8f-cf4wc" Oct 9 00:58:40.989042 kubelet[2545]: I1009 00:58:40.989033 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-655xd\" (UniqueName: \"kubernetes.io/projected/7d24239d-9728-43ad-8e3b-4764e51d2b6a-kube-api-access-655xd\") pod \"coredns-6f6b679f8f-4dwfj\" (UID: \"7d24239d-9728-43ad-8e3b-4764e51d2b6a\") " pod="kube-system/coredns-6f6b679f8f-4dwfj" Oct 9 00:58:40.989348 kubelet[2545]: I1009 00:58:40.989079 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d24239d-9728-43ad-8e3b-4764e51d2b6a-config-volume\") pod \"coredns-6f6b679f8f-4dwfj\" (UID: \"7d24239d-9728-43ad-8e3b-4764e51d2b6a\") " pod="kube-system/coredns-6f6b679f8f-4dwfj" Oct 9 00:58:40.989348 kubelet[2545]: I1009 00:58:40.989140 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d965255-e379-49fc-868c-682e1ba70749-tigera-ca-bundle\") pod \"calico-kube-controllers-584f4895f9-dcjnl\" (UID: \"1d965255-e379-49fc-868c-682e1ba70749\") " pod="calico-system/calico-kube-controllers-584f4895f9-dcjnl" Oct 9 00:58:40.989348 kubelet[2545]: I1009 00:58:40.989230 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spbl\" (UniqueName: \"kubernetes.io/projected/1d965255-e379-49fc-868c-682e1ba70749-kube-api-access-7spbl\") pod \"calico-kube-controllers-584f4895f9-dcjnl\" (UID: 
\"1d965255-e379-49fc-868c-682e1ba70749\") " pod="calico-system/calico-kube-controllers-584f4895f9-dcjnl" Oct 9 00:58:40.989348 kubelet[2545]: I1009 00:58:40.989266 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef261157-71d2-45a1-a374-251f204d107e-config-volume\") pod \"coredns-6f6b679f8f-cf4wc\" (UID: \"ef261157-71d2-45a1-a374-251f204d107e\") " pod="kube-system/coredns-6f6b679f8f-cf4wc" Oct 9 00:58:41.082168 systemd[1]: Created slice kubepods-besteffort-podc0542ea8_f247_4752_8f12_00a7cde64127.slice - libcontainer container kubepods-besteffort-podc0542ea8_f247_4752_8f12_00a7cde64127.slice. Oct 9 00:58:41.100999 containerd[1456]: time="2024-10-09T00:58:41.100623511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qj2tn,Uid:c0542ea8-f247-4752-8f12-00a7cde64127,Namespace:calico-system,Attempt:0,}" Oct 9 00:58:41.152698 kubelet[2545]: E1009 00:58:41.152578 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:41.208024 containerd[1456]: time="2024-10-09T00:58:41.207908969Z" level=info msg="shim disconnected" id=c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828 namespace=k8s.io Oct 9 00:58:41.208024 containerd[1456]: time="2024-10-09T00:58:41.207979367Z" level=warning msg="cleaning up after shim disconnected" id=c424a8c19a2df9e95a3cafc8ff6659e8c1d1f9b096f52af3abfb618f870f5828 namespace=k8s.io Oct 9 00:58:41.208024 containerd[1456]: time="2024-10-09T00:58:41.207987433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:58:41.232764 kubelet[2545]: E1009 00:58:41.232715 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:41.233575 
containerd[1456]: time="2024-10-09T00:58:41.233265691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4dwfj,Uid:7d24239d-9728-43ad-8e3b-4764e51d2b6a,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:41.238861 kubelet[2545]: E1009 00:58:41.238826 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:41.239407 containerd[1456]: time="2024-10-09T00:58:41.239367033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cf4wc,Uid:ef261157-71d2-45a1-a374-251f204d107e,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:41.244401 containerd[1456]: time="2024-10-09T00:58:41.244350098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-584f4895f9-dcjnl,Uid:1d965255-e379-49fc-868c-682e1ba70749,Namespace:calico-system,Attempt:0,}" Oct 9 00:58:41.323450 containerd[1456]: time="2024-10-09T00:58:41.323384839Z" level=error msg="Failed to destroy network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.324215 containerd[1456]: time="2024-10-09T00:58:41.323947534Z" level=error msg="encountered an error cleaning up failed sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.324215 containerd[1456]: time="2024-10-09T00:58:41.324015207Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-4dwfj,Uid:7d24239d-9728-43ad-8e3b-4764e51d2b6a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.324327 kubelet[2545]: E1009 00:58:41.324271 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.324396 kubelet[2545]: E1009 00:58:41.324371 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4dwfj" Oct 9 00:58:41.324433 kubelet[2545]: E1009 00:58:41.324403 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4dwfj" Oct 9 00:58:41.324521 kubelet[2545]: E1009 00:58:41.324458 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-6f6b679f8f-4dwfj_kube-system(7d24239d-9728-43ad-8e3b-4764e51d2b6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4dwfj_kube-system(7d24239d-9728-43ad-8e3b-4764e51d2b6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4dwfj" podUID="7d24239d-9728-43ad-8e3b-4764e51d2b6a" Oct 9 00:58:41.326245 containerd[1456]: time="2024-10-09T00:58:41.326137585Z" level=error msg="Failed to destroy network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.327133 containerd[1456]: time="2024-10-09T00:58:41.327095626Z" level=error msg="encountered an error cleaning up failed sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.327184 containerd[1456]: time="2024-10-09T00:58:41.327156646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qj2tn,Uid:c0542ea8-f247-4752-8f12-00a7cde64127,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Oct 9 00:58:41.328573 kubelet[2545]: E1009 00:58:41.328006 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.328573 kubelet[2545]: E1009 00:58:41.328080 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qj2tn" Oct 9 00:58:41.328573 kubelet[2545]: E1009 00:58:41.328102 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qj2tn" Oct 9 00:58:41.328686 kubelet[2545]: E1009 00:58:41.328142 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qj2tn_calico-system(c0542ea8-f247-4752-8f12-00a7cde64127)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qj2tn_calico-system(c0542ea8-f247-4752-8f12-00a7cde64127)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qj2tn" podUID="c0542ea8-f247-4752-8f12-00a7cde64127" Oct 9 00:58:41.339346 containerd[1456]: time="2024-10-09T00:58:41.339285243Z" level=error msg="Failed to destroy network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.339725 containerd[1456]: time="2024-10-09T00:58:41.339689877Z" level=error msg="encountered an error cleaning up failed sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.339773 containerd[1456]: time="2024-10-09T00:58:41.339754334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-584f4895f9-dcjnl,Uid:1d965255-e379-49fc-868c-682e1ba70749,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.339994 kubelet[2545]: E1009 00:58:41.339950 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.340033 kubelet[2545]: E1009 00:58:41.340002 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-584f4895f9-dcjnl" Oct 9 00:58:41.340033 kubelet[2545]: E1009 00:58:41.340020 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-584f4895f9-dcjnl" Oct 9 00:58:41.340093 kubelet[2545]: E1009 00:58:41.340055 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-584f4895f9-dcjnl_calico-system(1d965255-e379-49fc-868c-682e1ba70749)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-584f4895f9-dcjnl_calico-system(1d965255-e379-49fc-868c-682e1ba70749)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-584f4895f9-dcjnl" podUID="1d965255-e379-49fc-868c-682e1ba70749" Oct 9 00:58:41.353081 
containerd[1456]: time="2024-10-09T00:58:41.353025575Z" level=error msg="Failed to destroy network for sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.353523 containerd[1456]: time="2024-10-09T00:58:41.353461992Z" level=error msg="encountered an error cleaning up failed sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.353523 containerd[1456]: time="2024-10-09T00:58:41.353536879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cf4wc,Uid:ef261157-71d2-45a1-a374-251f204d107e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.353763 kubelet[2545]: E1009 00:58:41.353717 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:41.353763 kubelet[2545]: E1009 00:58:41.353755 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-cf4wc" Oct 9 00:58:41.353845 kubelet[2545]: E1009 00:58:41.353772 2545 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-cf4wc" Oct 9 00:58:41.353845 kubelet[2545]: E1009 00:58:41.353807 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-cf4wc_kube-system(ef261157-71d2-45a1-a374-251f204d107e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-cf4wc_kube-system(ef261157-71d2-45a1-a374-251f204d107e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-cf4wc" podUID="ef261157-71d2-45a1-a374-251f204d107e" Oct 9 00:58:41.892866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3-shm.mount: Deactivated successfully. 
Oct 9 00:58:42.155066 kubelet[2545]: I1009 00:58:42.154933 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:58:42.155834 containerd[1456]: time="2024-10-09T00:58:42.155778276Z" level=info msg="StopPodSandbox for \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\"" Oct 9 00:58:42.156376 containerd[1456]: time="2024-10-09T00:58:42.156034007Z" level=info msg="Ensure that sandbox c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135 in task-service has been cleanup successfully" Oct 9 00:58:42.157603 kubelet[2545]: I1009 00:58:42.157230 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:58:42.157769 containerd[1456]: time="2024-10-09T00:58:42.157745147Z" level=info msg="StopPodSandbox for \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\"" Oct 9 00:58:42.158044 containerd[1456]: time="2024-10-09T00:58:42.158004826Z" level=info msg="Ensure that sandbox 20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3 in task-service has been cleanup successfully" Oct 9 00:58:42.162344 kubelet[2545]: E1009 00:58:42.162311 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:42.163296 containerd[1456]: time="2024-10-09T00:58:42.163256581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 00:58:42.164763 kubelet[2545]: I1009 00:58:42.164208 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:58:42.165594 containerd[1456]: time="2024-10-09T00:58:42.164929777Z" level=info msg="StopPodSandbox for 
\"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\"" Oct 9 00:58:42.165594 containerd[1456]: time="2024-10-09T00:58:42.165237870Z" level=info msg="Ensure that sandbox 84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb in task-service has been cleanup successfully" Oct 9 00:58:42.167913 kubelet[2545]: I1009 00:58:42.167882 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:58:42.168321 containerd[1456]: time="2024-10-09T00:58:42.168295968Z" level=info msg="StopPodSandbox for \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\"" Oct 9 00:58:42.168500 containerd[1456]: time="2024-10-09T00:58:42.168486041Z" level=info msg="Ensure that sandbox 6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9 in task-service has been cleanup successfully" Oct 9 00:58:42.206391 containerd[1456]: time="2024-10-09T00:58:42.206326269Z" level=error msg="StopPodSandbox for \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\" failed" error="failed to destroy network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:42.207118 kubelet[2545]: E1009 00:58:42.206907 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:58:42.207118 kubelet[2545]: E1009 00:58:42.206973 2545 
kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135"} Oct 9 00:58:42.207118 kubelet[2545]: E1009 00:58:42.207057 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d24239d-9728-43ad-8e3b-4764e51d2b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 00:58:42.207118 kubelet[2545]: E1009 00:58:42.207081 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d24239d-9728-43ad-8e3b-4764e51d2b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4dwfj" podUID="7d24239d-9728-43ad-8e3b-4764e51d2b6a" Oct 9 00:58:42.209644 containerd[1456]: time="2024-10-09T00:58:42.209616512Z" level=error msg="StopPodSandbox for \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\" failed" error="failed to destroy network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:42.209924 kubelet[2545]: E1009 00:58:42.209812 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:58:42.209924 kubelet[2545]: E1009 00:58:42.209844 2545 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3"} Oct 9 00:58:42.209924 kubelet[2545]: E1009 00:58:42.209870 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c0542ea8-f247-4752-8f12-00a7cde64127\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 00:58:42.209924 kubelet[2545]: E1009 00:58:42.209892 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c0542ea8-f247-4752-8f12-00a7cde64127\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qj2tn" podUID="c0542ea8-f247-4752-8f12-00a7cde64127" Oct 9 00:58:42.212761 containerd[1456]: time="2024-10-09T00:58:42.212685692Z" level=error msg="StopPodSandbox for \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\" 
failed" error="failed to destroy network for sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:42.213015 kubelet[2545]: E1009 00:58:42.212968 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:58:42.213083 kubelet[2545]: E1009 00:58:42.213044 2545 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9"} Oct 9 00:58:42.213135 kubelet[2545]: E1009 00:58:42.213088 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef261157-71d2-45a1-a374-251f204d107e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 00:58:42.213135 kubelet[2545]: E1009 00:58:42.213114 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef261157-71d2-45a1-a374-251f204d107e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-cf4wc" podUID="ef261157-71d2-45a1-a374-251f204d107e" Oct 9 00:58:42.216486 containerd[1456]: time="2024-10-09T00:58:42.216431556Z" level=error msg="StopPodSandbox for \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\" failed" error="failed to destroy network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:58:42.216672 kubelet[2545]: E1009 00:58:42.216635 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:58:42.216749 kubelet[2545]: E1009 00:58:42.216679 2545 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb"} Oct 9 00:58:42.216749 kubelet[2545]: E1009 00:58:42.216710 2545 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d965255-e379-49fc-868c-682e1ba70749\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 00:58:42.216749 kubelet[2545]: E1009 00:58:42.216730 2545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d965255-e379-49fc-868c-682e1ba70749\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-584f4895f9-dcjnl" podUID="1d965255-e379-49fc-868c-682e1ba70749" Oct 9 00:58:45.866243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630658891.mount: Deactivated successfully. Oct 9 00:58:46.628187 containerd[1456]: time="2024-10-09T00:58:46.628136024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:46.629620 containerd[1456]: time="2024-10-09T00:58:46.629000776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 00:58:46.630443 containerd[1456]: time="2024-10-09T00:58:46.630409841Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:46.633281 containerd[1456]: time="2024-10-09T00:58:46.633204653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:46.633987 containerd[1456]: time="2024-10-09T00:58:46.633957336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id 
\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.470663783s" Oct 9 00:58:46.634100 containerd[1456]: time="2024-10-09T00:58:46.633991052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 00:58:46.644145 containerd[1456]: time="2024-10-09T00:58:46.644097200Z" level=info msg="CreateContainer within sandbox \"dedb5a89890f88e30e0be2f929829b90791b4b0ed20df0a7c3b9d3551a878d08\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 00:58:46.672029 containerd[1456]: time="2024-10-09T00:58:46.671984448Z" level=info msg="CreateContainer within sandbox \"dedb5a89890f88e30e0be2f929829b90791b4b0ed20df0a7c3b9d3551a878d08\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5cc96018b85619bbaa6d70e0de5d0e09682123a06b85d11e5a4ba1fa54c1935b\"" Oct 9 00:58:46.672634 containerd[1456]: time="2024-10-09T00:58:46.672608617Z" level=info msg="StartContainer for \"5cc96018b85619bbaa6d70e0de5d0e09682123a06b85d11e5a4ba1fa54c1935b\"" Oct 9 00:58:46.745654 systemd[1]: Started cri-containerd-5cc96018b85619bbaa6d70e0de5d0e09682123a06b85d11e5a4ba1fa54c1935b.scope - libcontainer container 5cc96018b85619bbaa6d70e0de5d0e09682123a06b85d11e5a4ba1fa54c1935b. Oct 9 00:58:46.842724 containerd[1456]: time="2024-10-09T00:58:46.842661455Z" level=info msg="StartContainer for \"5cc96018b85619bbaa6d70e0de5d0e09682123a06b85d11e5a4ba1fa54c1935b\" returns successfully" Oct 9 00:58:46.865587 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 00:58:46.865700 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 9 00:58:47.524500 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:56112.service - OpenSSH per-connection server daemon (10.0.0.1:56112). Oct 9 00:58:47.568705 sshd[3624]: Accepted publickey for core from 10.0.0.1 port 56112 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:58:47.570568 sshd[3624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:47.575048 systemd-logind[1445]: New session 8 of user core. Oct 9 00:58:47.582595 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 00:58:47.621615 kubelet[2545]: E1009 00:58:47.621352 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:47.636677 kubelet[2545]: I1009 00:58:47.636395 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ldsrr" podStartSLOduration=1.552826188 podStartE2EDuration="18.636374142s" podCreationTimestamp="2024-10-09 00:58:29 +0000 UTC" firstStartedPulling="2024-10-09 00:58:29.551438818 +0000 UTC m=+13.563510249" lastFinishedPulling="2024-10-09 00:58:46.634986773 +0000 UTC m=+30.647058203" observedRunningTime="2024-10-09 00:58:47.635991189 +0000 UTC m=+31.648062619" watchObservedRunningTime="2024-10-09 00:58:47.636374142 +0000 UTC m=+31.648445572" Oct 9 00:58:47.740412 sshd[3624]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:47.744972 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:56112.service: Deactivated successfully. Oct 9 00:58:47.747289 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 00:58:47.748047 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Oct 9 00:58:47.749074 systemd-logind[1445]: Removed session 8. 
Oct 9 00:58:48.446509 kernel: bpftool[3792]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 00:58:48.623172 kubelet[2545]: E1009 00:58:48.623127 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:48.728731 systemd-networkd[1387]: vxlan.calico: Link UP Oct 9 00:58:48.728741 systemd-networkd[1387]: vxlan.calico: Gained carrier Oct 9 00:58:50.445636 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Oct 9 00:58:52.751673 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:56122.service - OpenSSH per-connection server daemon (10.0.0.1:56122). Oct 9 00:58:52.799796 sshd[3891]: Accepted publickey for core from 10.0.0.1 port 56122 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:58:52.801760 sshd[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:52.806047 systemd-logind[1445]: New session 9 of user core. Oct 9 00:58:52.814608 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 00:58:52.947917 sshd[3891]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:52.952871 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:56122.service: Deactivated successfully. Oct 9 00:58:52.955421 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 00:58:52.956406 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Oct 9 00:58:52.957891 systemd-logind[1445]: Removed session 9. 
Oct 9 00:58:53.083201 containerd[1456]: time="2024-10-09T00:58:53.083137365Z" level=info msg="StopPodSandbox for \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\"" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.131 [INFO][3922] k8s.go 608: Cleaning up netns ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.131 [INFO][3922] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" iface="eth0" netns="/var/run/netns/cni-392e8a88-07b8-3819-cde6-454f105ff33d" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.131 [INFO][3922] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" iface="eth0" netns="/var/run/netns/cni-392e8a88-07b8-3819-cde6-454f105ff33d" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.132 [INFO][3922] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" iface="eth0" netns="/var/run/netns/cni-392e8a88-07b8-3819-cde6-454f105ff33d" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.132 [INFO][3922] k8s.go 615: Releasing IP address(es) ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.132 [INFO][3922] utils.go 188: Calico CNI releasing IP address ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.181 [INFO][3930] ipam_plugin.go 417: Releasing address using handleID ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" HandleID="k8s-pod-network.6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.182 [INFO][3930] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.182 [INFO][3930] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.188 [WARNING][3930] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" HandleID="k8s-pod-network.6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.188 [INFO][3930] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" HandleID="k8s-pod-network.6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.189 [INFO][3930] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:58:53.195647 containerd[1456]: 2024-10-09 00:58:53.192 [INFO][3922] k8s.go 621: Teardown processing complete. ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:58:53.196264 containerd[1456]: time="2024-10-09T00:58:53.195861731Z" level=info msg="TearDown network for sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\" successfully" Oct 9 00:58:53.196264 containerd[1456]: time="2024-10-09T00:58:53.195890276Z" level=info msg="StopPodSandbox for \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\" returns successfully" Oct 9 00:58:53.196375 kubelet[2545]: E1009 00:58:53.196338 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:53.197074 containerd[1456]: time="2024-10-09T00:58:53.197035412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cf4wc,Uid:ef261157-71d2-45a1-a374-251f204d107e,Namespace:kube-system,Attempt:1,}" Oct 9 00:58:53.198444 systemd[1]: run-netns-cni\x2d392e8a88\x2d07b8\x2d3819\x2dcde6\x2d454f105ff33d.mount: Deactivated successfully. 
Oct 9 00:58:54.122760 systemd-networkd[1387]: cali2534ceb9d38: Link UP Oct 9 00:58:54.122978 systemd-networkd[1387]: cali2534ceb9d38: Gained carrier Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:53.965 [INFO][3940] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0 coredns-6f6b679f8f- kube-system ef261157-71d2-45a1-a374-251f204d107e 759 0 2024-10-09 00:58:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-cf4wc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2534ceb9d38 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cf4wc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cf4wc-" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:53.965 [INFO][3940] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cf4wc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:53.993 [INFO][3954] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" HandleID="k8s-pod-network.3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.000 [INFO][3954] ipam_plugin.go 270: Auto assigning IP ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" 
HandleID="k8s-pod-network.3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003203b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-cf4wc", "timestamp":"2024-10-09 00:58:53.993821017 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.001 [INFO][3954] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.001 [INFO][3954] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.001 [INFO][3954] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.003 [INFO][3954] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" host="localhost" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.100 [INFO][3954] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.104 [INFO][3954] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.106 [INFO][3954] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.107 [INFO][3954] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.107 [INFO][3954] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" host="localhost" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.109 [INFO][3954] ipam.go 1685: Creating new handle: k8s-pod-network.3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7 Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.113 [INFO][3954] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" host="localhost" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.117 [INFO][3954] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" host="localhost" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.117 [INFO][3954] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" host="localhost" Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.117 [INFO][3954] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 00:58:54.141836 containerd[1456]: 2024-10-09 00:58:54.117 [INFO][3954] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" HandleID="k8s-pod-network.3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:54.143130 containerd[1456]: 2024-10-09 00:58:54.120 [INFO][3940] k8s.go 386: Populated endpoint ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cf4wc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ef261157-71d2-45a1-a374-251f204d107e", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-cf4wc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2534ceb9d38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:58:54.143130 containerd[1456]: 2024-10-09 00:58:54.120 [INFO][3940] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cf4wc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:54.143130 containerd[1456]: 2024-10-09 00:58:54.120 [INFO][3940] dataplane_linux.go 68: Setting the host side veth name to cali2534ceb9d38 ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cf4wc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:54.143130 containerd[1456]: 2024-10-09 00:58:54.122 [INFO][3940] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cf4wc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:54.143130 containerd[1456]: 2024-10-09 00:58:54.123 [INFO][3940] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cf4wc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0", 
GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ef261157-71d2-45a1-a374-251f204d107e", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7", Pod:"coredns-6f6b679f8f-cf4wc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2534ceb9d38", MAC:"26:90:6e:48:dd:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:58:54.143130 containerd[1456]: 2024-10-09 00:58:54.134 [INFO][3940] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7" Namespace="kube-system" Pod="coredns-6f6b679f8f-cf4wc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:58:54.183886 containerd[1456]: 
time="2024-10-09T00:58:54.183489593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:54.184224 containerd[1456]: time="2024-10-09T00:58:54.184167415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:54.184224 containerd[1456]: time="2024-10-09T00:58:54.184190600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:54.184331 containerd[1456]: time="2024-10-09T00:58:54.184279683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:54.205599 systemd[1]: Started cri-containerd-3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7.scope - libcontainer container 3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7. 
Oct 9 00:58:54.217269 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:58:54.239205 containerd[1456]: time="2024-10-09T00:58:54.239156747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cf4wc,Uid:ef261157-71d2-45a1-a374-251f204d107e,Namespace:kube-system,Attempt:1,} returns sandbox id \"3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7\"" Oct 9 00:58:54.239842 kubelet[2545]: E1009 00:58:54.239814 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:54.241409 containerd[1456]: time="2024-10-09T00:58:54.241375966Z" level=info msg="CreateContainer within sandbox \"3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:58:54.259190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4152750756.mount: Deactivated successfully. Oct 9 00:58:54.261933 containerd[1456]: time="2024-10-09T00:58:54.261870586Z" level=info msg="CreateContainer within sandbox \"3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ace98151bf1d40e9f35ed2e4e6e50b174eb22bccd6581d9d033a29cee68e730c\"" Oct 9 00:58:54.262656 containerd[1456]: time="2024-10-09T00:58:54.262587354Z" level=info msg="StartContainer for \"ace98151bf1d40e9f35ed2e4e6e50b174eb22bccd6581d9d033a29cee68e730c\"" Oct 9 00:58:54.296611 systemd[1]: Started cri-containerd-ace98151bf1d40e9f35ed2e4e6e50b174eb22bccd6581d9d033a29cee68e730c.scope - libcontainer container ace98151bf1d40e9f35ed2e4e6e50b174eb22bccd6581d9d033a29cee68e730c. 
Oct 9 00:58:54.328221 containerd[1456]: time="2024-10-09T00:58:54.328164887Z" level=info msg="StartContainer for \"ace98151bf1d40e9f35ed2e4e6e50b174eb22bccd6581d9d033a29cee68e730c\" returns successfully" Oct 9 00:58:54.645053 kubelet[2545]: E1009 00:58:54.644754 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:54.667827 kubelet[2545]: I1009 00:58:54.667732 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-cf4wc" podStartSLOduration=31.667707281 podStartE2EDuration="31.667707281s" podCreationTimestamp="2024-10-09 00:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:54.656838694 +0000 UTC m=+38.668910134" watchObservedRunningTime="2024-10-09 00:58:54.667707281 +0000 UTC m=+38.679778711" Oct 9 00:58:55.646799 kubelet[2545]: E1009 00:58:55.646740 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:56.076443 containerd[1456]: time="2024-10-09T00:58:56.076375381Z" level=info msg="StopPodSandbox for \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\"" Oct 9 00:58:56.141720 systemd-networkd[1387]: cali2534ceb9d38: Gained IPv6LL Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.153 [INFO][4084] k8s.go 608: Cleaning up netns ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.153 [INFO][4084] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" iface="eth0" netns="/var/run/netns/cni-25442571-ee91-14a6-c4f8-7c68d7dfb87b" Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.154 [INFO][4084] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" iface="eth0" netns="/var/run/netns/cni-25442571-ee91-14a6-c4f8-7c68d7dfb87b" Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.154 [INFO][4084] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" iface="eth0" netns="/var/run/netns/cni-25442571-ee91-14a6-c4f8-7c68d7dfb87b" Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.154 [INFO][4084] k8s.go 615: Releasing IP address(es) ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.154 [INFO][4084] utils.go 188: Calico CNI releasing IP address ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.174 [INFO][4092] ipam_plugin.go 417: Releasing address using handleID ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" HandleID="k8s-pod-network.20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.174 [INFO][4092] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.174 [INFO][4092] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.179 [WARNING][4092] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" HandleID="k8s-pod-network.20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.179 [INFO][4092] ipam_plugin.go 445: Releasing address using workloadID ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" HandleID="k8s-pod-network.20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.180 [INFO][4092] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:58:56.185263 containerd[1456]: 2024-10-09 00:58:56.182 [INFO][4084] k8s.go 621: Teardown processing complete. ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:58:56.185772 containerd[1456]: time="2024-10-09T00:58:56.185496573Z" level=info msg="TearDown network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\" successfully" Oct 9 00:58:56.185772 containerd[1456]: time="2024-10-09T00:58:56.185532512Z" level=info msg="StopPodSandbox for \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\" returns successfully" Oct 9 00:58:56.186356 containerd[1456]: time="2024-10-09T00:58:56.186291549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qj2tn,Uid:c0542ea8-f247-4752-8f12-00a7cde64127,Namespace:calico-system,Attempt:1,}" Oct 9 00:58:56.189018 systemd[1]: run-netns-cni\x2d25442571\x2dee91\x2d14a6\x2dc4f8\x2d7c68d7dfb87b.mount: Deactivated successfully. 
Oct 9 00:58:56.306253 systemd-networkd[1387]: cali61fd745b0cf: Link UP Oct 9 00:58:56.307128 systemd-networkd[1387]: cali61fd745b0cf: Gained carrier Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.239 [INFO][4100] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qj2tn-eth0 csi-node-driver- calico-system c0542ea8-f247-4752-8f12-00a7cde64127 792 0 2024-10-09 00:58:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-qj2tn eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali61fd745b0cf [] []}} ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Namespace="calico-system" Pod="csi-node-driver-qj2tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qj2tn-" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.239 [INFO][4100] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Namespace="calico-system" Pod="csi-node-driver-qj2tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.267 [INFO][4111] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" HandleID="k8s-pod-network.5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.275 [INFO][4111] ipam_plugin.go 270: Auto assigning IP ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" 
HandleID="k8s-pod-network.5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f51a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qj2tn", "timestamp":"2024-10-09 00:58:56.267146246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.275 [INFO][4111] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.276 [INFO][4111] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.276 [INFO][4111] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.278 [INFO][4111] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" host="localhost" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.282 [INFO][4111] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.286 [INFO][4111] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.288 [INFO][4111] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.289 [INFO][4111] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.289 [INFO][4111] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" host="localhost" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.291 [INFO][4111] ipam.go 1685: Creating new handle: k8s-pod-network.5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85 Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.295 [INFO][4111] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" host="localhost" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.300 [INFO][4111] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" host="localhost" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.300 [INFO][4111] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" host="localhost" Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.301 [INFO][4111] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 00:58:56.320752 containerd[1456]: 2024-10-09 00:58:56.301 [INFO][4111] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" HandleID="k8s-pod-network.5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.321579 containerd[1456]: 2024-10-09 00:58:56.303 [INFO][4100] k8s.go 386: Populated endpoint ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Namespace="calico-system" Pod="csi-node-driver-qj2tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qj2tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qj2tn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0542ea8-f247-4752-8f12-00a7cde64127", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qj2tn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"cali61fd745b0cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:58:56.321579 containerd[1456]: 2024-10-09 00:58:56.304 [INFO][4100] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Namespace="calico-system" Pod="csi-node-driver-qj2tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.321579 containerd[1456]: 2024-10-09 00:58:56.304 [INFO][4100] dataplane_linux.go 68: Setting the host side veth name to cali61fd745b0cf ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Namespace="calico-system" Pod="csi-node-driver-qj2tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.321579 containerd[1456]: 2024-10-09 00:58:56.307 [INFO][4100] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Namespace="calico-system" Pod="csi-node-driver-qj2tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.321579 containerd[1456]: 2024-10-09 00:58:56.307 [INFO][4100] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Namespace="calico-system" Pod="csi-node-driver-qj2tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qj2tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qj2tn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0542ea8-f247-4752-8f12-00a7cde64127", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85", Pod:"csi-node-driver-qj2tn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali61fd745b0cf", MAC:"ea:61:f9:09:21:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:58:56.321579 containerd[1456]: 2024-10-09 00:58:56.317 [INFO][4100] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85" Namespace="calico-system" Pod="csi-node-driver-qj2tn" WorkloadEndpoint="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:58:56.343678 containerd[1456]: time="2024-10-09T00:58:56.343430567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:56.343678 containerd[1456]: time="2024-10-09T00:58:56.343530002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:56.343678 containerd[1456]: time="2024-10-09T00:58:56.343544400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:56.343678 containerd[1456]: time="2024-10-09T00:58:56.343645937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:56.376620 systemd[1]: Started cri-containerd-5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85.scope - libcontainer container 5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85. Oct 9 00:58:56.391679 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:58:56.407944 containerd[1456]: time="2024-10-09T00:58:56.407888912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qj2tn,Uid:c0542ea8-f247-4752-8f12-00a7cde64127,Namespace:calico-system,Attempt:1,} returns sandbox id \"5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85\"" Oct 9 00:58:56.409646 containerd[1456]: time="2024-10-09T00:58:56.409609621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 00:58:56.652134 kubelet[2545]: E1009 00:58:56.652001 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:57.077133 containerd[1456]: time="2024-10-09T00:58:57.077077732Z" level=info msg="StopPodSandbox for \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\"" Oct 9 00:58:57.077598 containerd[1456]: time="2024-10-09T00:58:57.077191073Z" level=info msg="StopPodSandbox for \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\"" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.122 [INFO][4204] k8s.go 608: Cleaning up netns ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.122 [INFO][4204] dataplane_linux.go 530: Deleting 
workload's device in netns. ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" iface="eth0" netns="/var/run/netns/cni-b9851865-c306-75af-ae53-e76df9feacc6" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.124 [INFO][4204] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" iface="eth0" netns="/var/run/netns/cni-b9851865-c306-75af-ae53-e76df9feacc6" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.124 [INFO][4204] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" iface="eth0" netns="/var/run/netns/cni-b9851865-c306-75af-ae53-e76df9feacc6" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.125 [INFO][4204] k8s.go 615: Releasing IP address(es) ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.125 [INFO][4204] utils.go 188: Calico CNI releasing IP address ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.160 [INFO][4220] ipam_plugin.go 417: Releasing address using handleID ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" HandleID="k8s-pod-network.c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.160 [INFO][4220] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.160 [INFO][4220] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.166 [WARNING][4220] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" HandleID="k8s-pod-network.c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.166 [INFO][4220] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" HandleID="k8s-pod-network.c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.168 [INFO][4220] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:58:57.173174 containerd[1456]: 2024-10-09 00:58:57.171 [INFO][4204] k8s.go 621: Teardown processing complete. ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:58:57.177613 containerd[1456]: time="2024-10-09T00:58:57.177562274Z" level=info msg="TearDown network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\" successfully" Oct 9 00:58:57.177613 containerd[1456]: time="2024-10-09T00:58:57.177600138Z" level=info msg="StopPodSandbox for \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\" returns successfully" Oct 9 00:58:57.178069 kubelet[2545]: E1009 00:58:57.178035 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:57.178643 containerd[1456]: time="2024-10-09T00:58:57.178615813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4dwfj,Uid:7d24239d-9728-43ad-8e3b-4764e51d2b6a,Namespace:kube-system,Attempt:1,}" Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.130 [INFO][4205] k8s.go 608: Cleaning up netns ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 
00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.131 [INFO][4205] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" iface="eth0" netns="/var/run/netns/cni-0dc5eddf-46b3-9a4a-ff3f-273e8c839b1e" Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.131 [INFO][4205] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" iface="eth0" netns="/var/run/netns/cni-0dc5eddf-46b3-9a4a-ff3f-273e8c839b1e" Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.132 [INFO][4205] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" iface="eth0" netns="/var/run/netns/cni-0dc5eddf-46b3-9a4a-ff3f-273e8c839b1e" Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.132 [INFO][4205] k8s.go 615: Releasing IP address(es) ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.132 [INFO][4205] utils.go 188: Calico CNI releasing IP address ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.164 [INFO][4225] ipam_plugin.go 417: Releasing address using handleID ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" HandleID="k8s-pod-network.84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.164 [INFO][4225] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.168 [INFO][4225] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.174 [WARNING][4225] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" HandleID="k8s-pod-network.84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.174 [INFO][4225] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" HandleID="k8s-pod-network.84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.175 [INFO][4225] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:58:57.181318 containerd[1456]: 2024-10-09 00:58:57.178 [INFO][4205] k8s.go 621: Teardown processing complete. ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:58:57.181748 containerd[1456]: time="2024-10-09T00:58:57.181458690Z" level=info msg="TearDown network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\" successfully" Oct 9 00:58:57.181748 containerd[1456]: time="2024-10-09T00:58:57.181504380Z" level=info msg="StopPodSandbox for \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\" returns successfully" Oct 9 00:58:57.182350 containerd[1456]: time="2024-10-09T00:58:57.182308503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-584f4895f9-dcjnl,Uid:1d965255-e379-49fc-868c-682e1ba70749,Namespace:calico-system,Attempt:1,}" Oct 9 00:58:57.188751 systemd[1]: run-netns-cni\x2d0dc5eddf\x2d46b3\x2d9a4a\x2dff3f\x2d273e8c839b1e.mount: Deactivated successfully. 
Oct 9 00:58:57.188906 systemd[1]: run-netns-cni\x2db9851865\x2dc306\x2d75af\x2dae53\x2de76df9feacc6.mount: Deactivated successfully. Oct 9 00:58:57.309907 systemd-networkd[1387]: cali1b0623989b2: Link UP Oct 9 00:58:57.310105 systemd-networkd[1387]: cali1b0623989b2: Gained carrier Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.242 [INFO][4236] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0 coredns-6f6b679f8f- kube-system 7d24239d-9728-43ad-8e3b-4764e51d2b6a 801 0 2024-10-09 00:58:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-4dwfj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1b0623989b2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Namespace="kube-system" Pod="coredns-6f6b679f8f-4dwfj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4dwfj-" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.243 [INFO][4236] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Namespace="kube-system" Pod="coredns-6f6b679f8f-4dwfj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.272 [INFO][4264] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" HandleID="k8s-pod-network.d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.281 [INFO][4264] ipam_plugin.go 270: Auto assigning IP 
ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" HandleID="k8s-pod-network.d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dc1c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-4dwfj", "timestamp":"2024-10-09 00:58:57.272900159 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.281 [INFO][4264] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.281 [INFO][4264] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.281 [INFO][4264] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.283 [INFO][4264] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" host="localhost" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.286 [INFO][4264] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.291 [INFO][4264] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.292 [INFO][4264] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.294 [INFO][4264] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 00:58:57.322596 containerd[1456]: 
2024-10-09 00:58:57.294 [INFO][4264] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" host="localhost" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.296 [INFO][4264] ipam.go 1685: Creating new handle: k8s-pod-network.d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46 Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.300 [INFO][4264] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" host="localhost" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.305 [INFO][4264] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" host="localhost" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.305 [INFO][4264] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" host="localhost" Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.305 [INFO][4264] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 00:58:57.322596 containerd[1456]: 2024-10-09 00:58:57.305 [INFO][4264] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" HandleID="k8s-pod-network.d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.323298 containerd[1456]: 2024-10-09 00:58:57.307 [INFO][4236] k8s.go 386: Populated endpoint ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Namespace="kube-system" Pod="coredns-6f6b679f8f-4dwfj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7d24239d-9728-43ad-8e3b-4764e51d2b6a", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-4dwfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b0623989b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:58:57.323298 containerd[1456]: 2024-10-09 00:58:57.308 [INFO][4236] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Namespace="kube-system" Pod="coredns-6f6b679f8f-4dwfj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.323298 containerd[1456]: 2024-10-09 00:58:57.308 [INFO][4236] dataplane_linux.go 68: Setting the host side veth name to cali1b0623989b2 ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Namespace="kube-system" Pod="coredns-6f6b679f8f-4dwfj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.323298 containerd[1456]: 2024-10-09 00:58:57.310 [INFO][4236] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Namespace="kube-system" Pod="coredns-6f6b679f8f-4dwfj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.323298 containerd[1456]: 2024-10-09 00:58:57.310 [INFO][4236] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Namespace="kube-system" Pod="coredns-6f6b679f8f-4dwfj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0", 
GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7d24239d-9728-43ad-8e3b-4764e51d2b6a", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46", Pod:"coredns-6f6b679f8f-4dwfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b0623989b2", MAC:"02:98:c2:27:d5:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:58:57.323298 containerd[1456]: 2024-10-09 00:58:57.319 [INFO][4236] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46" Namespace="kube-system" Pod="coredns-6f6b679f8f-4dwfj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:58:57.352765 containerd[1456]: 
time="2024-10-09T00:58:57.352027685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:57.352765 containerd[1456]: time="2024-10-09T00:58:57.352112479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:57.352765 containerd[1456]: time="2024-10-09T00:58:57.352137918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:57.352765 containerd[1456]: time="2024-10-09T00:58:57.352372454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:57.378662 systemd[1]: Started cri-containerd-d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46.scope - libcontainer container d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46. 
Oct 9 00:58:57.393550 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:58:57.419920 systemd-networkd[1387]: cali1ad83cb20f1: Link UP Oct 9 00:58:57.420914 systemd-networkd[1387]: cali1ad83cb20f1: Gained carrier Oct 9 00:58:57.424682 containerd[1456]: time="2024-10-09T00:58:57.424370728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4dwfj,Uid:7d24239d-9728-43ad-8e3b-4764e51d2b6a,Namespace:kube-system,Attempt:1,} returns sandbox id \"d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46\"" Oct 9 00:58:57.425310 kubelet[2545]: E1009 00:58:57.425269 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:57.433072 containerd[1456]: time="2024-10-09T00:58:57.432163931Z" level=info msg="CreateContainer within sandbox \"d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.244 [INFO][4244] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0 calico-kube-controllers-584f4895f9- calico-system 1d965255-e379-49fc-868c-682e1ba70749 803 0 2024-10-09 00:58:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:584f4895f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-584f4895f9-dcjnl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1ad83cb20f1 [] []}} ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" 
Namespace="calico-system" Pod="calico-kube-controllers-584f4895f9-dcjnl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.244 [INFO][4244] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Namespace="calico-system" Pod="calico-kube-controllers-584f4895f9-dcjnl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.277 [INFO][4265] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" HandleID="k8s-pod-network.f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.286 [INFO][4265] ipam_plugin.go 270: Auto assigning IP ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" HandleID="k8s-pod-network.f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367350), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-584f4895f9-dcjnl", "timestamp":"2024-10-09 00:58:57.27775557 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.286 [INFO][4265] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.305 [INFO][4265] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.305 [INFO][4265] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.384 [INFO][4265] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" host="localhost" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.391 [INFO][4265] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.395 [INFO][4265] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.396 [INFO][4265] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.398 [INFO][4265] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.398 [INFO][4265] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" host="localhost" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.399 [INFO][4265] ipam.go 1685: Creating new handle: k8s-pod-network.f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.404 [INFO][4265] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" host="localhost" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.408 [INFO][4265] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] 
block=192.168.88.128/26 handle="k8s-pod-network.f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" host="localhost" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.408 [INFO][4265] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" host="localhost" Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.408 [INFO][4265] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:58:57.440732 containerd[1456]: 2024-10-09 00:58:57.409 [INFO][4265] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" HandleID="k8s-pod-network.f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.441343 containerd[1456]: 2024-10-09 00:58:57.416 [INFO][4244] k8s.go 386: Populated endpoint ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Namespace="calico-system" Pod="calico-kube-controllers-584f4895f9-dcjnl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0", GenerateName:"calico-kube-controllers-584f4895f9-", Namespace:"calico-system", SelfLink:"", UID:"1d965255-e379-49fc-868c-682e1ba70749", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"584f4895f9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-584f4895f9-dcjnl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ad83cb20f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:58:57.441343 containerd[1456]: 2024-10-09 00:58:57.417 [INFO][4244] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Namespace="calico-system" Pod="calico-kube-controllers-584f4895f9-dcjnl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.441343 containerd[1456]: 2024-10-09 00:58:57.417 [INFO][4244] dataplane_linux.go 68: Setting the host side veth name to cali1ad83cb20f1 ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Namespace="calico-system" Pod="calico-kube-controllers-584f4895f9-dcjnl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.441343 containerd[1456]: 2024-10-09 00:58:57.421 [INFO][4244] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Namespace="calico-system" Pod="calico-kube-controllers-584f4895f9-dcjnl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.441343 containerd[1456]: 2024-10-09 00:58:57.421 [INFO][4244] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Namespace="calico-system" Pod="calico-kube-controllers-584f4895f9-dcjnl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0", GenerateName:"calico-kube-controllers-584f4895f9-", Namespace:"calico-system", SelfLink:"", UID:"1d965255-e379-49fc-868c-682e1ba70749", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"584f4895f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce", Pod:"calico-kube-controllers-584f4895f9-dcjnl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ad83cb20f1", MAC:"86:20:01:69:45:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:58:57.441343 containerd[1456]: 2024-10-09 00:58:57.436 [INFO][4244] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce" Namespace="calico-system" Pod="calico-kube-controllers-584f4895f9-dcjnl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:58:57.457326 containerd[1456]: time="2024-10-09T00:58:57.457260770Z" level=info msg="CreateContainer within sandbox \"d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a459eb922bbcbc188d2e72f5c4066e2c155dfbdb15bacbf2b6adcab41533cc9\"" Oct 9 00:58:57.458436 containerd[1456]: time="2024-10-09T00:58:57.458338414Z" level=info msg="StartContainer for \"3a459eb922bbcbc188d2e72f5c4066e2c155dfbdb15bacbf2b6adcab41533cc9\"" Oct 9 00:58:57.472010 containerd[1456]: time="2024-10-09T00:58:57.471529664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:57.472010 containerd[1456]: time="2024-10-09T00:58:57.471647122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:57.472010 containerd[1456]: time="2024-10-09T00:58:57.471675257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:57.472010 containerd[1456]: time="2024-10-09T00:58:57.471809358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:57.490598 systemd[1]: Started cri-containerd-3a459eb922bbcbc188d2e72f5c4066e2c155dfbdb15bacbf2b6adcab41533cc9.scope - libcontainer container 3a459eb922bbcbc188d2e72f5c4066e2c155dfbdb15bacbf2b6adcab41533cc9. 
Oct 9 00:58:57.500702 systemd[1]: Started cri-containerd-f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce.scope - libcontainer container f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce. Oct 9 00:58:57.515772 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:58:57.528990 containerd[1456]: time="2024-10-09T00:58:57.528910625Z" level=info msg="StartContainer for \"3a459eb922bbcbc188d2e72f5c4066e2c155dfbdb15bacbf2b6adcab41533cc9\" returns successfully" Oct 9 00:58:57.563692 containerd[1456]: time="2024-10-09T00:58:57.562782345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-584f4895f9-dcjnl,Uid:1d965255-e379-49fc-868c-682e1ba70749,Namespace:calico-system,Attempt:1,} returns sandbox id \"f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce\"" Oct 9 00:58:57.658609 kubelet[2545]: E1009 00:58:57.657387 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:57.668319 kubelet[2545]: I1009 00:58:57.667365 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4dwfj" podStartSLOduration=34.6673471 podStartE2EDuration="34.6673471s" podCreationTimestamp="2024-10-09 00:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:57.666955739 +0000 UTC m=+41.679027159" watchObservedRunningTime="2024-10-09 00:58:57.6673471 +0000 UTC m=+41.679418530" Oct 9 00:58:57.869715 systemd-networkd[1387]: cali61fd745b0cf: Gained IPv6LL Oct 9 00:58:57.971448 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:39010.service - OpenSSH per-connection server daemon (10.0.0.1:39010). 
Oct 9 00:58:58.036565 sshd[4434]: Accepted publickey for core from 10.0.0.1 port 39010 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:58:58.039315 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:58.047489 systemd-logind[1445]: New session 10 of user core. Oct 9 00:58:58.052796 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 00:58:58.126600 containerd[1456]: time="2024-10-09T00:58:58.126524675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:58.127752 containerd[1456]: time="2024-10-09T00:58:58.127542812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 00:58:58.130387 containerd[1456]: time="2024-10-09T00:58:58.128929987Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:58.132384 containerd[1456]: time="2024-10-09T00:58:58.131287035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:58.132384 containerd[1456]: time="2024-10-09T00:58:58.132189709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.72254015s" Oct 9 00:58:58.132384 containerd[1456]: time="2024-10-09T00:58:58.132213706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference 
\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 00:58:58.133506 containerd[1456]: time="2024-10-09T00:58:58.133456159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 00:58:58.134504 containerd[1456]: time="2024-10-09T00:58:58.134456473Z" level=info msg="CreateContainer within sandbox \"5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 00:58:58.161153 containerd[1456]: time="2024-10-09T00:58:58.161079499Z" level=info msg="CreateContainer within sandbox \"5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b8c2c0c3c4b7255730ed221c1fbe920438ebf58be3d0c407dd1bc73bb2a79aae\"" Oct 9 00:58:58.162069 containerd[1456]: time="2024-10-09T00:58:58.162038352Z" level=info msg="StartContainer for \"b8c2c0c3c4b7255730ed221c1fbe920438ebf58be3d0c407dd1bc73bb2a79aae\"" Oct 9 00:58:58.205681 sshd[4434]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:58.205759 systemd[1]: Started cri-containerd-b8c2c0c3c4b7255730ed221c1fbe920438ebf58be3d0c407dd1bc73bb2a79aae.scope - libcontainer container b8c2c0c3c4b7255730ed221c1fbe920438ebf58be3d0c407dd1bc73bb2a79aae. Oct 9 00:58:58.212582 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:39010.service: Deactivated successfully. Oct 9 00:58:58.216418 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 00:58:58.217630 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Oct 9 00:58:58.218793 systemd-logind[1445]: Removed session 10. 
Oct 9 00:58:58.351243 containerd[1456]: time="2024-10-09T00:58:58.351180012Z" level=info msg="StartContainer for \"b8c2c0c3c4b7255730ed221c1fbe920438ebf58be3d0c407dd1bc73bb2a79aae\" returns successfully" Oct 9 00:58:58.638661 systemd-networkd[1387]: cali1b0623989b2: Gained IPv6LL Oct 9 00:58:58.661285 kubelet[2545]: E1009 00:58:58.661245 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:59.469647 systemd-networkd[1387]: cali1ad83cb20f1: Gained IPv6LL Oct 9 00:58:59.663495 kubelet[2545]: E1009 00:58:59.663329 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:00.354144 containerd[1456]: time="2024-10-09T00:59:00.354077892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:59:00.354876 containerd[1456]: time="2024-10-09T00:59:00.354776887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 00:59:00.355981 containerd[1456]: time="2024-10-09T00:59:00.355938239Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:59:00.357990 containerd[1456]: time="2024-10-09T00:59:00.357956693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:59:00.358607 containerd[1456]: time="2024-10-09T00:59:00.358558470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id 
\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.225047194s" Oct 9 00:59:00.358644 containerd[1456]: time="2024-10-09T00:59:00.358607665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 00:59:00.359736 containerd[1456]: time="2024-10-09T00:59:00.359701697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 00:59:00.369020 containerd[1456]: time="2024-10-09T00:59:00.368824717Z" level=info msg="CreateContainer within sandbox \"f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 00:59:00.384782 containerd[1456]: time="2024-10-09T00:59:00.384728867Z" level=info msg="CreateContainer within sandbox \"f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5f5eb954db4666565140751207d9000e37f6bbc6e4787cb87f570a0767dab192\"" Oct 9 00:59:00.386557 containerd[1456]: time="2024-10-09T00:59:00.385488810Z" level=info msg="StartContainer for \"5f5eb954db4666565140751207d9000e37f6bbc6e4787cb87f570a0767dab192\"" Oct 9 00:59:00.426704 systemd[1]: Started cri-containerd-5f5eb954db4666565140751207d9000e37f6bbc6e4787cb87f570a0767dab192.scope - libcontainer container 5f5eb954db4666565140751207d9000e37f6bbc6e4787cb87f570a0767dab192. 
Oct 9 00:59:00.473420 containerd[1456]: time="2024-10-09T00:59:00.473348916Z" level=info msg="StartContainer for \"5f5eb954db4666565140751207d9000e37f6bbc6e4787cb87f570a0767dab192\" returns successfully" Oct 9 00:59:00.679020 kubelet[2545]: I1009 00:59:00.678860 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-584f4895f9-dcjnl" podStartSLOduration=28.884367854 podStartE2EDuration="31.678838194s" podCreationTimestamp="2024-10-09 00:58:29 +0000 UTC" firstStartedPulling="2024-10-09 00:58:57.565030717 +0000 UTC m=+41.577102137" lastFinishedPulling="2024-10-09 00:59:00.359501047 +0000 UTC m=+44.371572477" observedRunningTime="2024-10-09 00:59:00.678284711 +0000 UTC m=+44.690356141" watchObservedRunningTime="2024-10-09 00:59:00.678838194 +0000 UTC m=+44.690909624" Oct 9 00:59:01.734524 containerd[1456]: time="2024-10-09T00:59:01.734448315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:59:01.735534 containerd[1456]: time="2024-10-09T00:59:01.735483079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 00:59:01.736620 containerd[1456]: time="2024-10-09T00:59:01.736596416Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:59:01.738987 containerd[1456]: time="2024-10-09T00:59:01.738944966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:59:01.739629 containerd[1456]: time="2024-10-09T00:59:01.739587882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with 
image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.37985255s" Oct 9 00:59:01.739694 containerd[1456]: time="2024-10-09T00:59:01.739632729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 00:59:01.741917 containerd[1456]: time="2024-10-09T00:59:01.741894120Z" level=info msg="CreateContainer within sandbox \"5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 00:59:01.759131 containerd[1456]: time="2024-10-09T00:59:01.759090656Z" level=info msg="CreateContainer within sandbox \"5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5a9845691619a4a654dbabfca716328e697bc6b533dd1e03ab3d34767239fbc2\"" Oct 9 00:59:01.759673 containerd[1456]: time="2024-10-09T00:59:01.759648157Z" level=info msg="StartContainer for \"5a9845691619a4a654dbabfca716328e697bc6b533dd1e03ab3d34767239fbc2\"" Oct 9 00:59:01.806936 systemd[1]: Started cri-containerd-5a9845691619a4a654dbabfca716328e697bc6b533dd1e03ab3d34767239fbc2.scope - libcontainer container 5a9845691619a4a654dbabfca716328e697bc6b533dd1e03ab3d34767239fbc2. 
Oct 9 00:59:01.869516 containerd[1456]: time="2024-10-09T00:59:01.869376365Z" level=info msg="StartContainer for \"5a9845691619a4a654dbabfca716328e697bc6b533dd1e03ab3d34767239fbc2\" returns successfully" Oct 9 00:59:02.146000 kubelet[2545]: I1009 00:59:02.145955 2545 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 00:59:02.146000 kubelet[2545]: I1009 00:59:02.146006 2545 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 00:59:02.684218 kubelet[2545]: I1009 00:59:02.684136 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qj2tn" podStartSLOduration=28.352981689 podStartE2EDuration="33.684104941s" podCreationTimestamp="2024-10-09 00:58:29 +0000 UTC" firstStartedPulling="2024-10-09 00:58:56.409256554 +0000 UTC m=+40.421327984" lastFinishedPulling="2024-10-09 00:59:01.740379806 +0000 UTC m=+45.752451236" observedRunningTime="2024-10-09 00:59:02.683124222 +0000 UTC m=+46.695195653" watchObservedRunningTime="2024-10-09 00:59:02.684104941 +0000 UTC m=+46.696176371" Oct 9 00:59:03.218256 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:39014.service - OpenSSH per-connection server daemon (10.0.0.1:39014). Oct 9 00:59:03.266273 sshd[4607]: Accepted publickey for core from 10.0.0.1 port 39014 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:59:03.268233 sshd[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:03.272930 systemd-logind[1445]: New session 11 of user core. Oct 9 00:59:03.280671 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 9 00:59:03.426817 sshd[4607]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:03.440615 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:39014.service: Deactivated successfully. Oct 9 00:59:03.442697 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 00:59:03.444488 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Oct 9 00:59:03.449809 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:39020.service - OpenSSH per-connection server daemon (10.0.0.1:39020). Oct 9 00:59:03.450791 systemd-logind[1445]: Removed session 11. Oct 9 00:59:03.485292 sshd[4623]: Accepted publickey for core from 10.0.0.1 port 39020 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:59:03.487134 sshd[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:03.491224 systemd-logind[1445]: New session 12 of user core. Oct 9 00:59:03.500612 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 00:59:03.649561 sshd[4623]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:03.660020 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:39020.service: Deactivated successfully. Oct 9 00:59:03.663224 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 00:59:03.666396 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Oct 9 00:59:03.675817 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:39022.service - OpenSSH per-connection server daemon (10.0.0.1:39022). Oct 9 00:59:03.677974 systemd-logind[1445]: Removed session 12. Oct 9 00:59:03.714948 sshd[4635]: Accepted publickey for core from 10.0.0.1 port 39022 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:59:03.717326 sshd[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:03.722700 systemd-logind[1445]: New session 13 of user core. Oct 9 00:59:03.727709 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 9 00:59:03.846070 sshd[4635]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:03.850634 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:39022.service: Deactivated successfully. Oct 9 00:59:03.852697 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 00:59:03.853313 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Oct 9 00:59:03.854416 systemd-logind[1445]: Removed session 13. Oct 9 00:59:08.859427 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:56594.service - OpenSSH per-connection server daemon (10.0.0.1:56594). Oct 9 00:59:08.899012 sshd[4658]: Accepted publickey for core from 10.0.0.1 port 56594 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:59:08.901040 sshd[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:08.905250 systemd-logind[1445]: New session 14 of user core. Oct 9 00:59:08.915610 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 00:59:09.033789 sshd[4658]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:09.038768 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:56594.service: Deactivated successfully. Oct 9 00:59:09.041243 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 00:59:09.041900 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Oct 9 00:59:09.042894 systemd-logind[1445]: Removed session 14. Oct 9 00:59:14.044549 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:56600.service - OpenSSH per-connection server daemon (10.0.0.1:56600). Oct 9 00:59:14.096205 sshd[4685]: Accepted publickey for core from 10.0.0.1 port 56600 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:59:14.097943 sshd[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:14.101663 systemd-logind[1445]: New session 15 of user core. Oct 9 00:59:14.112595 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 9 00:59:14.226726 sshd[4685]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:14.229737 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:56600.service: Deactivated successfully. Oct 9 00:59:14.232316 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 00:59:14.234098 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Oct 9 00:59:14.235160 systemd-logind[1445]: Removed session 15. Oct 9 00:59:14.771582 kubelet[2545]: E1009 00:59:14.771545 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:16.063396 containerd[1456]: time="2024-10-09T00:59:16.063349198Z" level=info msg="StopPodSandbox for \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\"" Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.095 [WARNING][4734] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qj2tn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0542ea8-f247-4752-8f12-00a7cde64127", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85", Pod:"csi-node-driver-qj2tn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali61fd745b0cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.095 [INFO][4734] k8s.go 608: Cleaning up netns ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.095 [INFO][4734] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" iface="eth0" netns="" Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.095 [INFO][4734] k8s.go 615: Releasing IP address(es) ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.095 [INFO][4734] utils.go 188: Calico CNI releasing IP address ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.117 [INFO][4743] ipam_plugin.go 417: Releasing address using handleID ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" HandleID="k8s-pod-network.20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.117 [INFO][4743] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.118 [INFO][4743] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.122 [WARNING][4743] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" HandleID="k8s-pod-network.20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.122 [INFO][4743] ipam_plugin.go 445: Releasing address using workloadID ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" HandleID="k8s-pod-network.20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.123 [INFO][4743] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 00:59:16.129530 containerd[1456]: 2024-10-09 00:59:16.126 [INFO][4734] k8s.go 621: Teardown processing complete. ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:59:16.130179 containerd[1456]: time="2024-10-09T00:59:16.130130563Z" level=info msg="TearDown network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\" successfully" Oct 9 00:59:16.130179 containerd[1456]: time="2024-10-09T00:59:16.130158447Z" level=info msg="StopPodSandbox for \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\" returns successfully" Oct 9 00:59:16.135239 containerd[1456]: time="2024-10-09T00:59:16.135205025Z" level=info msg="RemovePodSandbox for \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\"" Oct 9 00:59:16.147997 containerd[1456]: time="2024-10-09T00:59:16.147958704Z" level=info msg="Forcibly stopping sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\"" Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.178 [WARNING][4765] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qj2tn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0542ea8-f247-4752-8f12-00a7cde64127", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5695f0edf4eee03f014346319c6f79509a1f58af118057527cc231d470750e85", Pod:"csi-node-driver-qj2tn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali61fd745b0cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.179 [INFO][4765] k8s.go 608: Cleaning up netns ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.179 [INFO][4765] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" iface="eth0" netns="" Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.179 [INFO][4765] k8s.go 615: Releasing IP address(es) ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.179 [INFO][4765] utils.go 188: Calico CNI releasing IP address ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.197 [INFO][4772] ipam_plugin.go 417: Releasing address using handleID ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" HandleID="k8s-pod-network.20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.197 [INFO][4772] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.197 [INFO][4772] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.201 [WARNING][4772] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" HandleID="k8s-pod-network.20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.201 [INFO][4772] ipam_plugin.go 445: Releasing address using workloadID ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" HandleID="k8s-pod-network.20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Workload="localhost-k8s-csi--node--driver--qj2tn-eth0" Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.202 [INFO][4772] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 00:59:16.207739 containerd[1456]: 2024-10-09 00:59:16.205 [INFO][4765] k8s.go 621: Teardown processing complete. ContainerID="20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3" Oct 9 00:59:16.208176 containerd[1456]: time="2024-10-09T00:59:16.207777968Z" level=info msg="TearDown network for sandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\" successfully" Oct 9 00:59:16.472435 containerd[1456]: time="2024-10-09T00:59:16.472324655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 00:59:16.472435 containerd[1456]: time="2024-10-09T00:59:16.472399899Z" level=info msg="RemovePodSandbox \"20593275c463735ed453492eae49b5cef204af83a64e5f3eaac14236090128e3\" returns successfully" Oct 9 00:59:16.473135 containerd[1456]: time="2024-10-09T00:59:16.473102377Z" level=info msg="StopPodSandbox for \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\"" Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.503 [WARNING][4794] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7d24239d-9728-43ad-8e3b-4764e51d2b6a", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46", Pod:"coredns-6f6b679f8f-4dwfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b0623989b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.503 [INFO][4794] k8s.go 608: Cleaning up netns 
ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.503 [INFO][4794] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" iface="eth0" netns="" Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.503 [INFO][4794] k8s.go 615: Releasing IP address(es) ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.503 [INFO][4794] utils.go 188: Calico CNI releasing IP address ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.521 [INFO][4802] ipam_plugin.go 417: Releasing address using handleID ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" HandleID="k8s-pod-network.c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.521 [INFO][4802] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.521 [INFO][4802] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.526 [WARNING][4802] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" HandleID="k8s-pod-network.c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.526 [INFO][4802] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" HandleID="k8s-pod-network.c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.528 [INFO][4802] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:59:16.532332 containerd[1456]: 2024-10-09 00:59:16.530 [INFO][4794] k8s.go 621: Teardown processing complete. ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:59:16.532804 containerd[1456]: time="2024-10-09T00:59:16.532357277Z" level=info msg="TearDown network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\" successfully" Oct 9 00:59:16.532804 containerd[1456]: time="2024-10-09T00:59:16.532381383Z" level=info msg="StopPodSandbox for \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\" returns successfully" Oct 9 00:59:16.532900 containerd[1456]: time="2024-10-09T00:59:16.532873477Z" level=info msg="RemovePodSandbox for \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\"" Oct 9 00:59:16.532941 containerd[1456]: time="2024-10-09T00:59:16.532903325Z" level=info msg="Forcibly stopping sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\"" Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.565 [WARNING][4826] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7d24239d-9728-43ad-8e3b-4764e51d2b6a", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1338b113db702031618bc7c778ffde590d9086689a83345908731326881fb46", Pod:"coredns-6f6b679f8f-4dwfj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1b0623989b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.565 [INFO][4826] k8s.go 608: Cleaning up netns 
ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.565 [INFO][4826] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" iface="eth0" netns="" Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.565 [INFO][4826] k8s.go 615: Releasing IP address(es) ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.565 [INFO][4826] utils.go 188: Calico CNI releasing IP address ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.586 [INFO][4834] ipam_plugin.go 417: Releasing address using handleID ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" HandleID="k8s-pod-network.c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.586 [INFO][4834] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.586 [INFO][4834] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.591 [WARNING][4834] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" HandleID="k8s-pod-network.c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.591 [INFO][4834] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" HandleID="k8s-pod-network.c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Workload="localhost-k8s-coredns--6f6b679f8f--4dwfj-eth0" Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.593 [INFO][4834] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:59:16.598688 containerd[1456]: 2024-10-09 00:59:16.595 [INFO][4826] k8s.go 621: Teardown processing complete. ContainerID="c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135" Oct 9 00:59:16.599146 containerd[1456]: time="2024-10-09T00:59:16.598735770Z" level=info msg="TearDown network for sandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\" successfully" Oct 9 00:59:16.602382 containerd[1456]: time="2024-10-09T00:59:16.602352296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 00:59:16.602528 containerd[1456]: time="2024-10-09T00:59:16.602396781Z" level=info msg="RemovePodSandbox \"c1a812dae5fa0d63593ad9adde96bbc9712b821707f189ac881d45603b9c2135\" returns successfully" Oct 9 00:59:16.602932 containerd[1456]: time="2024-10-09T00:59:16.602908343Z" level=info msg="StopPodSandbox for \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\"" Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.637 [WARNING][4858] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0", GenerateName:"calico-kube-controllers-584f4895f9-", Namespace:"calico-system", SelfLink:"", UID:"1d965255-e379-49fc-868c-682e1ba70749", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"584f4895f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce", Pod:"calico-kube-controllers-584f4895f9-dcjnl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ad83cb20f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.637 [INFO][4858] k8s.go 608: Cleaning up netns ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.637 [INFO][4858] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" iface="eth0" netns="" Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.637 [INFO][4858] k8s.go 615: Releasing IP address(es) ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.637 [INFO][4858] utils.go 188: Calico CNI releasing IP address ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.659 [INFO][4866] ipam_plugin.go 417: Releasing address using handleID ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" HandleID="k8s-pod-network.84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.659 [INFO][4866] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.659 [INFO][4866] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.663 [WARNING][4866] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" HandleID="k8s-pod-network.84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.663 [INFO][4866] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" HandleID="k8s-pod-network.84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.665 [INFO][4866] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:59:16.669804 containerd[1456]: 2024-10-09 00:59:16.667 [INFO][4858] k8s.go 621: Teardown processing complete. ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:59:16.670371 containerd[1456]: time="2024-10-09T00:59:16.670318655Z" level=info msg="TearDown network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\" successfully" Oct 9 00:59:16.670371 containerd[1456]: time="2024-10-09T00:59:16.670356156Z" level=info msg="StopPodSandbox for \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\" returns successfully" Oct 9 00:59:16.670974 containerd[1456]: time="2024-10-09T00:59:16.670934496Z" level=info msg="RemovePodSandbox for \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\"" Oct 9 00:59:16.671048 containerd[1456]: time="2024-10-09T00:59:16.670991965Z" level=info msg="Forcibly stopping sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\"" Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.708 [WARNING][4888] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0", GenerateName:"calico-kube-controllers-584f4895f9-", Namespace:"calico-system", SelfLink:"", UID:"1d965255-e379-49fc-868c-682e1ba70749", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"584f4895f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8d1d929e4b49d7f2e30721499d2bf263784e5a9b8b5d097aafcf9f8602fe8ce", Pod:"calico-kube-controllers-584f4895f9-dcjnl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ad83cb20f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.708 [INFO][4888] k8s.go 608: Cleaning up netns ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.708 [INFO][4888] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" iface="eth0" netns="" Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.708 [INFO][4888] k8s.go 615: Releasing IP address(es) ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.708 [INFO][4888] utils.go 188: Calico CNI releasing IP address ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.728 [INFO][4896] ipam_plugin.go 417: Releasing address using handleID ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" HandleID="k8s-pod-network.84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.728 [INFO][4896] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.728 [INFO][4896] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.733 [WARNING][4896] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" HandleID="k8s-pod-network.84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.733 [INFO][4896] ipam_plugin.go 445: Releasing address using workloadID ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" HandleID="k8s-pod-network.84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Workload="localhost-k8s-calico--kube--controllers--584f4895f9--dcjnl-eth0" Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.735 [INFO][4896] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:59:16.740132 containerd[1456]: 2024-10-09 00:59:16.737 [INFO][4888] k8s.go 621: Teardown processing complete. ContainerID="84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb" Oct 9 00:59:16.740132 containerd[1456]: time="2024-10-09T00:59:16.740090285Z" level=info msg="TearDown network for sandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\" successfully" Oct 9 00:59:16.743743 containerd[1456]: time="2024-10-09T00:59:16.743712151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 00:59:16.743816 containerd[1456]: time="2024-10-09T00:59:16.743763920Z" level=info msg="RemovePodSandbox \"84d31cb8ec195149034c3c08f23c39bc0cb456a62b1e7d041df9c23f747f21fb\" returns successfully" Oct 9 00:59:16.744327 containerd[1456]: time="2024-10-09T00:59:16.744276153Z" level=info msg="StopPodSandbox for \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\"" Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.778 [WARNING][4919] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ef261157-71d2-45a1-a374-251f204d107e", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7", Pod:"coredns-6f6b679f8f-cf4wc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2534ceb9d38", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.779 [INFO][4919] k8s.go 608: Cleaning up netns ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.779 [INFO][4919] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" iface="eth0" netns="" Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.779 [INFO][4919] k8s.go 615: Releasing IP address(es) ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.779 [INFO][4919] utils.go 188: Calico CNI releasing IP address ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.797 [INFO][4926] ipam_plugin.go 417: Releasing address using handleID ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" HandleID="k8s-pod-network.6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.797 [INFO][4926] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.797 [INFO][4926] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.802 [WARNING][4926] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" HandleID="k8s-pod-network.6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.802 [INFO][4926] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" HandleID="k8s-pod-network.6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.804 [INFO][4926] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:59:16.808998 containerd[1456]: 2024-10-09 00:59:16.806 [INFO][4919] k8s.go 621: Teardown processing complete. ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:59:16.809430 containerd[1456]: time="2024-10-09T00:59:16.809047262Z" level=info msg="TearDown network for sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\" successfully" Oct 9 00:59:16.809430 containerd[1456]: time="2024-10-09T00:59:16.809072472Z" level=info msg="StopPodSandbox for \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\" returns successfully" Oct 9 00:59:16.809604 containerd[1456]: time="2024-10-09T00:59:16.809568222Z" level=info msg="RemovePodSandbox for \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\"" Oct 9 00:59:16.809604 containerd[1456]: time="2024-10-09T00:59:16.809600764Z" level=info msg="Forcibly stopping sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\"" Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.844 [WARNING][4949] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ef261157-71d2-45a1-a374-251f204d107e", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d977fffab39c03f176050981926636aa383c2964506898dc2010108095d33a7", Pod:"coredns-6f6b679f8f-cf4wc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2534ceb9d38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.844 [INFO][4949] k8s.go 608: 
Cleaning up netns ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.844 [INFO][4949] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" iface="eth0" netns="" Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.844 [INFO][4949] k8s.go 615: Releasing IP address(es) ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.844 [INFO][4949] utils.go 188: Calico CNI releasing IP address ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.863 [INFO][4956] ipam_plugin.go 417: Releasing address using handleID ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" HandleID="k8s-pod-network.6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.863 [INFO][4956] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.863 [INFO][4956] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.868 [WARNING][4956] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" HandleID="k8s-pod-network.6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.869 [INFO][4956] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" HandleID="k8s-pod-network.6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Workload="localhost-k8s-coredns--6f6b679f8f--cf4wc-eth0" Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.870 [INFO][4956] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:59:16.875217 containerd[1456]: 2024-10-09 00:59:16.872 [INFO][4949] k8s.go 621: Teardown processing complete. ContainerID="6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9" Oct 9 00:59:16.875753 containerd[1456]: time="2024-10-09T00:59:16.875255579Z" level=info msg="TearDown network for sandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\" successfully" Oct 9 00:59:16.879599 containerd[1456]: time="2024-10-09T00:59:16.879564473Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 00:59:16.879664 containerd[1456]: time="2024-10-09T00:59:16.879622495Z" level=info msg="RemovePodSandbox \"6e4e0647739bb03643c9c6ded51ad89a35a13593c859f4b494ebcbf364c247e9\" returns successfully" Oct 9 00:59:19.237428 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:59056.service - OpenSSH per-connection server daemon (10.0.0.1:59056). 
Oct 9 00:59:19.295109 sshd[4988]: Accepted publickey for core from 10.0.0.1 port 59056 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:59:19.296947 sshd[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:19.301103 systemd-logind[1445]: New session 16 of user core. Oct 9 00:59:19.322613 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 00:59:19.445306 sshd[4988]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:19.449632 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:59056.service: Deactivated successfully. Oct 9 00:59:19.452101 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 00:59:19.452864 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Oct 9 00:59:19.453799 systemd-logind[1445]: Removed session 16. Oct 9 00:59:23.780033 systemd[1]: Created slice kubepods-besteffort-pod4668ea4f_fe79_4861_a284_6402290192b8.slice - libcontainer container kubepods-besteffort-pod4668ea4f_fe79_4861_a284_6402290192b8.slice. 
Oct 9 00:59:23.856051 kubelet[2545]: I1009 00:59:23.855989 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4668ea4f-fe79-4861-a284-6402290192b8-calico-apiserver-certs\") pod \"calico-apiserver-684f789b48-stzg7\" (UID: \"4668ea4f-fe79-4861-a284-6402290192b8\") " pod="calico-apiserver/calico-apiserver-684f789b48-stzg7" Oct 9 00:59:23.856051 kubelet[2545]: I1009 00:59:23.856036 2545 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxqv6\" (UniqueName: \"kubernetes.io/projected/4668ea4f-fe79-4861-a284-6402290192b8-kube-api-access-fxqv6\") pod \"calico-apiserver-684f789b48-stzg7\" (UID: \"4668ea4f-fe79-4861-a284-6402290192b8\") " pod="calico-apiserver/calico-apiserver-684f789b48-stzg7" Oct 9 00:59:23.956552 kubelet[2545]: E1009 00:59:23.956461 2545 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 00:59:23.956739 kubelet[2545]: E1009 00:59:23.956646 2545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4668ea4f-fe79-4861-a284-6402290192b8-calico-apiserver-certs podName:4668ea4f-fe79-4861-a284-6402290192b8 nodeName:}" failed. No retries permitted until 2024-10-09 00:59:24.456599679 +0000 UTC m=+68.468671109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/4668ea4f-fe79-4861-a284-6402290192b8-calico-apiserver-certs") pod "calico-apiserver-684f789b48-stzg7" (UID: "4668ea4f-fe79-4861-a284-6402290192b8") : secret "calico-apiserver-certs" not found Oct 9 00:59:24.456782 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:59068.service - OpenSSH per-connection server daemon (10.0.0.1:59068). 
Oct 9 00:59:24.496460 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 59068 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug Oct 9 00:59:24.498240 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:24.502691 systemd-logind[1445]: New session 17 of user core. Oct 9 00:59:24.511699 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 00:59:24.631177 sshd[5014]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:24.635669 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:59068.service: Deactivated successfully. Oct 9 00:59:24.637621 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 00:59:24.638220 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Oct 9 00:59:24.639156 systemd-logind[1445]: Removed session 17. Oct 9 00:59:24.686217 containerd[1456]: time="2024-10-09T00:59:24.686144601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684f789b48-stzg7,Uid:4668ea4f-fe79-4861-a284-6402290192b8,Namespace:calico-apiserver,Attempt:0,}" Oct 9 00:59:24.876227 systemd-networkd[1387]: calid2560e8e49e: Link UP Oct 9 00:59:24.877847 systemd-networkd[1387]: calid2560e8e49e: Gained carrier Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.732 [INFO][5030] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0 calico-apiserver-684f789b48- calico-apiserver 4668ea4f-fe79-4861-a284-6402290192b8 1023 0 2024-10-09 00:59:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:684f789b48 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-684f789b48-stzg7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
calid2560e8e49e [] []}} ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Namespace="calico-apiserver" Pod="calico-apiserver-684f789b48-stzg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--684f789b48--stzg7-"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.732 [INFO][5030] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Namespace="calico-apiserver" Pod="calico-apiserver-684f789b48-stzg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.758 [INFO][5043] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" HandleID="k8s-pod-network.1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Workload="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.791 [INFO][5043] ipam_plugin.go 270: Auto assigning IP ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" HandleID="k8s-pod-network.1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Workload="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-684f789b48-stzg7", "timestamp":"2024-10-09 00:59:24.758185235 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.791 [INFO][5043] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.791 [INFO][5043] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.791 [INFO][5043] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.793 [INFO][5043] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" host="localhost"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.796 [INFO][5043] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.800 [INFO][5043] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.801 [INFO][5043] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.803 [INFO][5043] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.803 [INFO][5043] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" host="localhost"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.805 [INFO][5043] ipam.go 1685: Creating new handle: k8s-pod-network.1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.860 [INFO][5043] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" host="localhost"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.868 [INFO][5043] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" host="localhost"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.868 [INFO][5043] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" host="localhost"
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.868 [INFO][5043] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 00:59:24.889804 containerd[1456]: 2024-10-09 00:59:24.868 [INFO][5043] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" HandleID="k8s-pod-network.1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Workload="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0"
Oct 9 00:59:24.890405 containerd[1456]: 2024-10-09 00:59:24.872 [INFO][5030] k8s.go 386: Populated endpoint ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Namespace="calico-apiserver" Pod="calico-apiserver-684f789b48-stzg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0", GenerateName:"calico-apiserver-684f789b48-", Namespace:"calico-apiserver", SelfLink:"", UID:"4668ea4f-fe79-4861-a284-6402290192b8", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684f789b48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-684f789b48-stzg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2560e8e49e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 00:59:24.890405 containerd[1456]: 2024-10-09 00:59:24.872 [INFO][5030] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Namespace="calico-apiserver" Pod="calico-apiserver-684f789b48-stzg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0"
Oct 9 00:59:24.890405 containerd[1456]: 2024-10-09 00:59:24.872 [INFO][5030] dataplane_linux.go 68: Setting the host side veth name to calid2560e8e49e ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Namespace="calico-apiserver" Pod="calico-apiserver-684f789b48-stzg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0"
Oct 9 00:59:24.890405 containerd[1456]: 2024-10-09 00:59:24.877 [INFO][5030] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Namespace="calico-apiserver" Pod="calico-apiserver-684f789b48-stzg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0"
Oct 9 00:59:24.890405 containerd[1456]: 2024-10-09 00:59:24.878 [INFO][5030] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Namespace="calico-apiserver" Pod="calico-apiserver-684f789b48-stzg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0", GenerateName:"calico-apiserver-684f789b48-", Namespace:"calico-apiserver", SelfLink:"", UID:"4668ea4f-fe79-4861-a284-6402290192b8", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684f789b48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df", Pod:"calico-apiserver-684f789b48-stzg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2560e8e49e", MAC:"f6:e6:0e:5e:73:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 00:59:24.890405 containerd[1456]: 2024-10-09 00:59:24.886 [INFO][5030] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df" Namespace="calico-apiserver" Pod="calico-apiserver-684f789b48-stzg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--684f789b48--stzg7-eth0"
Oct 9 00:59:24.913846 containerd[1456]: time="2024-10-09T00:59:24.913741065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:59:24.914051 containerd[1456]: time="2024-10-09T00:59:24.913866886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:59:24.914051 containerd[1456]: time="2024-10-09T00:59:24.913886263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:59:24.914051 containerd[1456]: time="2024-10-09T00:59:24.913963561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:59:24.936668 systemd[1]: Started cri-containerd-1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df.scope - libcontainer container 1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df.
Oct 9 00:59:24.950432 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 9 00:59:24.977136 containerd[1456]: time="2024-10-09T00:59:24.976902975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684f789b48-stzg7,Uid:4668ea4f-fe79-4861-a284-6402290192b8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df\""
Oct 9 00:59:24.979419 containerd[1456]: time="2024-10-09T00:59:24.979376721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 9 00:59:26.414630 systemd-networkd[1387]: calid2560e8e49e: Gained IPv6LL
Oct 9 00:59:27.113647 systemd[1]: run-containerd-runc-k8s.io-5f5eb954db4666565140751207d9000e37f6bbc6e4787cb87f570a0767dab192-runc.n7VGS1.mount: Deactivated successfully.
Oct 9 00:59:27.151200 containerd[1456]: time="2024-10-09T00:59:27.151134348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:59:27.151850 containerd[1456]: time="2024-10-09T00:59:27.151791242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 9 00:59:27.152993 containerd[1456]: time="2024-10-09T00:59:27.152960593Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:59:27.155271 containerd[1456]: time="2024-10-09T00:59:27.155232119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:59:27.155848 containerd[1456]: time="2024-10-09T00:59:27.155806354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.176397641s"
Oct 9 00:59:27.155848 containerd[1456]: time="2024-10-09T00:59:27.155844437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 9 00:59:27.157834 containerd[1456]: time="2024-10-09T00:59:27.157777806Z" level=info msg="CreateContainer within sandbox \"1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 9 00:59:27.169095 containerd[1456]: time="2024-10-09T00:59:27.169048896Z" level=info msg="CreateContainer within sandbox \"1c2a303a022370c0ceb4ccffe92d729c07d01382f1bcf26e944f873e339351df\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2a65f78f3ff27849c4c2b917c260fbef686dc483fa317b94eb0794b9264a8a3\""
Oct 9 00:59:27.169618 containerd[1456]: time="2024-10-09T00:59:27.169528562Z" level=info msg="StartContainer for \"f2a65f78f3ff27849c4c2b917c260fbef686dc483fa317b94eb0794b9264a8a3\""
Oct 9 00:59:27.202638 systemd[1]: Started cri-containerd-f2a65f78f3ff27849c4c2b917c260fbef686dc483fa317b94eb0794b9264a8a3.scope - libcontainer container f2a65f78f3ff27849c4c2b917c260fbef686dc483fa317b94eb0794b9264a8a3.
Oct 9 00:59:27.302230 containerd[1456]: time="2024-10-09T00:59:27.302172526Z" level=info msg="StartContainer for \"f2a65f78f3ff27849c4c2b917c260fbef686dc483fa317b94eb0794b9264a8a3\" returns successfully"
Oct 9 00:59:27.747233 kubelet[2545]: I1009 00:59:27.747165 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-684f789b48-stzg7" podStartSLOduration=2.569544413 podStartE2EDuration="4.747139471s" podCreationTimestamp="2024-10-09 00:59:23 +0000 UTC" firstStartedPulling="2024-10-09 00:59:24.978943143 +0000 UTC m=+68.991014573" lastFinishedPulling="2024-10-09 00:59:27.156538201 +0000 UTC m=+71.168609631" observedRunningTime="2024-10-09 00:59:27.737767636 +0000 UTC m=+71.749839066" watchObservedRunningTime="2024-10-09 00:59:27.747139471 +0000 UTC m=+71.759210901"
Oct 9 00:59:29.645064 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:54352.service - OpenSSH per-connection server daemon (10.0.0.1:54352).
Oct 9 00:59:29.702835 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 54352 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 00:59:29.704705 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:29.709443 systemd-logind[1445]: New session 18 of user core.
Oct 9 00:59:29.717623 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 9 00:59:29.856894 sshd[5189]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:29.866511 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:54352.service: Deactivated successfully.
Oct 9 00:59:29.868675 systemd[1]: session-18.scope: Deactivated successfully.
Oct 9 00:59:29.871087 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit.
Oct 9 00:59:29.880772 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:54358.service - OpenSSH per-connection server daemon (10.0.0.1:54358).
Oct 9 00:59:29.881953 systemd-logind[1445]: Removed session 18.
Oct 9 00:59:29.916194 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 54358 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 00:59:29.918092 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:29.923000 systemd-logind[1445]: New session 19 of user core.
Oct 9 00:59:29.928651 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 9 00:59:30.201014 sshd[5203]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:30.214278 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:54358.service: Deactivated successfully.
Oct 9 00:59:30.217079 systemd[1]: session-19.scope: Deactivated successfully.
Oct 9 00:59:30.219296 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit.
Oct 9 00:59:30.231321 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:54362.service - OpenSSH per-connection server daemon (10.0.0.1:54362).
Oct 9 00:59:30.232854 systemd-logind[1445]: Removed session 19.
Oct 9 00:59:30.265698 sshd[5218]: Accepted publickey for core from 10.0.0.1 port 54362 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 00:59:30.267383 sshd[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:30.271989 systemd-logind[1445]: New session 20 of user core.
Oct 9 00:59:30.282610 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 9 00:59:31.775901 sshd[5218]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:31.786760 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:54362.service: Deactivated successfully.
Oct 9 00:59:31.791794 systemd[1]: session-20.scope: Deactivated successfully.
Oct 9 00:59:31.795653 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Oct 9 00:59:31.807703 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:54378.service - OpenSSH per-connection server daemon (10.0.0.1:54378).
Oct 9 00:59:31.810521 systemd-logind[1445]: Removed session 20.
Oct 9 00:59:31.843261 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 54378 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 00:59:31.845190 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:31.850316 systemd-logind[1445]: New session 21 of user core.
Oct 9 00:59:31.857627 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 9 00:59:32.110563 sshd[5242]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:32.120207 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:54378.service: Deactivated successfully.
Oct 9 00:59:32.122379 systemd[1]: session-21.scope: Deactivated successfully.
Oct 9 00:59:32.125806 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Oct 9 00:59:32.132922 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:54380.service - OpenSSH per-connection server daemon (10.0.0.1:54380).
Oct 9 00:59:32.134082 systemd-logind[1445]: Removed session 21.
Oct 9 00:59:32.168778 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 54380 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 00:59:32.170644 sshd[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:32.174989 systemd-logind[1445]: New session 22 of user core.
Oct 9 00:59:32.186599 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 9 00:59:32.300324 sshd[5257]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:32.305073 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:54380.service: Deactivated successfully.
Oct 9 00:59:32.307366 systemd[1]: session-22.scope: Deactivated successfully.
Oct 9 00:59:32.308010 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Oct 9 00:59:32.308991 systemd-logind[1445]: Removed session 22.
Oct 9 00:59:34.076188 kubelet[2545]: E1009 00:59:34.076120 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:59:37.315682 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:39416.service - OpenSSH per-connection server daemon (10.0.0.1:39416).
Oct 9 00:59:37.355217 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 39416 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 00:59:37.357104 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:37.361581 systemd-logind[1445]: New session 23 of user core.
Oct 9 00:59:37.369618 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 9 00:59:37.490807 sshd[5276]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:37.495358 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:39416.service: Deactivated successfully.
Oct 9 00:59:37.497808 systemd[1]: session-23.scope: Deactivated successfully.
Oct 9 00:59:37.498462 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Oct 9 00:59:37.499485 systemd-logind[1445]: Removed session 23.
Oct 9 00:59:40.076620 kubelet[2545]: E1009 00:59:40.076571 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:59:42.502650 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:39426.service - OpenSSH per-connection server daemon (10.0.0.1:39426).
Oct 9 00:59:42.543264 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 39426 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 00:59:42.544820 sshd[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:42.548791 systemd-logind[1445]: New session 24 of user core.
Oct 9 00:59:42.553572 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 00:59:42.666223 sshd[5295]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:42.670912 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:39426.service: Deactivated successfully.
Oct 9 00:59:42.673191 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 00:59:42.673869 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Oct 9 00:59:42.674774 systemd-logind[1445]: Removed session 24.
Oct 9 00:59:47.076835 kubelet[2545]: E1009 00:59:47.076775 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:59:47.679146 systemd[1]: Started sshd@24-10.0.0.52:22-10.0.0.1:55458.service - OpenSSH per-connection server daemon (10.0.0.1:55458).
Oct 9 00:59:47.719817 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 55458 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 00:59:47.721549 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:47.726186 systemd-logind[1445]: New session 25 of user core.
Oct 9 00:59:47.733600 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 00:59:47.866632 sshd[5333]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:47.870844 systemd[1]: sshd@24-10.0.0.52:22-10.0.0.1:55458.service: Deactivated successfully.
Oct 9 00:59:47.873194 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 00:59:47.873928 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Oct 9 00:59:47.875077 systemd-logind[1445]: Removed session 25.
Oct 9 00:59:52.884850 systemd[1]: Started sshd@25-10.0.0.52:22-10.0.0.1:55470.service - OpenSSH per-connection server daemon (10.0.0.1:55470).
Oct 9 00:59:52.920330 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 55470 ssh2: RSA SHA256:KS4HVlwIXjIoLzXqtuCR/CQI5hNukykBIcj/y/M03ug
Oct 9 00:59:52.922028 sshd[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:52.926007 systemd-logind[1445]: New session 26 of user core.
Oct 9 00:59:52.932588 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 00:59:53.047030 sshd[5373]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:53.051827 systemd[1]: sshd@25-10.0.0.52:22-10.0.0.1:55470.service: Deactivated successfully.
Oct 9 00:59:53.054122 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 00:59:53.054848 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit.
Oct 9 00:59:53.055885 systemd-logind[1445]: Removed session 26.
Oct 9 00:59:54.076705 kubelet[2545]: E1009 00:59:54.076660 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"