Oct 9 07:17:07.930712 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024
Oct 9 07:17:07.930735 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:17:07.930747 kernel: BIOS-provided physical RAM map:
Oct 9 07:17:07.930753 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 07:17:07.930759 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 07:17:07.930766 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 07:17:07.930773 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 9 07:17:07.930779 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 9 07:17:07.930785 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 9 07:17:07.930794 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 9 07:17:07.930804 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 07:17:07.930811 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 07:17:07.930817 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 9 07:17:07.930823 kernel: NX (Execute Disable) protection: active
Oct 9 07:17:07.930831 kernel: APIC: Static calls initialized
Oct 9 07:17:07.930840 kernel: SMBIOS 2.8 present.
Oct 9 07:17:07.930849 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 9 07:17:07.930856 kernel: Hypervisor detected: KVM
Oct 9 07:17:07.930863 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 07:17:07.930870 kernel: kvm-clock: using sched offset of 2792415990 cycles
Oct 9 07:17:07.930877 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 07:17:07.930884 kernel: tsc: Detected 2794.750 MHz processor
Oct 9 07:17:07.930891 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 07:17:07.930898 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 07:17:07.930908 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 9 07:17:07.930915 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 07:17:07.930922 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 07:17:07.930929 kernel: Using GB pages for direct mapping
Oct 9 07:17:07.930936 kernel: ACPI: Early table checksum verification disabled
Oct 9 07:17:07.930943 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 9 07:17:07.930950 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:17:07.930957 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:17:07.930964 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:17:07.930973 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 9 07:17:07.930980 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:17:07.930987 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:17:07.930994 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:17:07.931001 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:17:07.931008 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Oct 9 07:17:07.931015 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Oct 9 07:17:07.931029 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 9 07:17:07.931039 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Oct 9 07:17:07.931046 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Oct 9 07:17:07.931053 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Oct 9 07:17:07.931060 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Oct 9 07:17:07.931067 kernel: No NUMA configuration found
Oct 9 07:17:07.931075 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 9 07:17:07.931084 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Oct 9 07:17:07.931092 kernel: Zone ranges:
Oct 9 07:17:07.931099 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 07:17:07.931106 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 9 07:17:07.931113 kernel: Normal empty
Oct 9 07:17:07.931121 kernel: Movable zone start for each node
Oct 9 07:17:07.931128 kernel: Early memory node ranges
Oct 9 07:17:07.931135 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 07:17:07.931142 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 9 07:17:07.931157 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 9 07:17:07.931169 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:17:07.931176 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 07:17:07.931183 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 9 07:17:07.931191 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 07:17:07.931198 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 07:17:07.931205 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 07:17:07.931212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 07:17:07.931220 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 07:17:07.931227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 07:17:07.931237 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 07:17:07.931244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 07:17:07.931251 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 07:17:07.931258 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 07:17:07.931266 kernel: TSC deadline timer available
Oct 9 07:17:07.931273 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 9 07:17:07.931291 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 07:17:07.931298 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 9 07:17:07.931306 kernel: kvm-guest: setup PV sched yield
Oct 9 07:17:07.931318 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 9 07:17:07.931326 kernel: Booting paravirtualized kernel on KVM
Oct 9 07:17:07.931333 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 07:17:07.931341 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 9 07:17:07.931348 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Oct 9 07:17:07.931355 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Oct 9 07:17:07.931362 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 9 07:17:07.931369 kernel: kvm-guest: PV spinlocks enabled
Oct 9 07:17:07.931377 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 9 07:17:07.931388 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:17:07.931395 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 07:17:07.931403 kernel: random: crng init done
Oct 9 07:17:07.931410 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 9 07:17:07.931417 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 07:17:07.931424 kernel: Fallback order for Node 0: 0
Oct 9 07:17:07.931432 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Oct 9 07:17:07.931439 kernel: Policy zone: DMA32
Oct 9 07:17:07.931449 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 07:17:07.931456 kernel: Memory: 2428448K/2571752K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 143044K reserved, 0K cma-reserved)
Oct 9 07:17:07.931464 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 9 07:17:07.931471 kernel: ftrace: allocating 37706 entries in 148 pages
Oct 9 07:17:07.931478 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 07:17:07.931486 kernel: Dynamic Preempt: voluntary
Oct 9 07:17:07.931493 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 07:17:07.931501 kernel: rcu: RCU event tracing is enabled.
Oct 9 07:17:07.931508 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 9 07:17:07.931518 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 07:17:07.931526 kernel: Rude variant of Tasks RCU enabled.
Oct 9 07:17:07.931533 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 07:17:07.931543 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 07:17:07.931550 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 9 07:17:07.931558 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 9 07:17:07.931565 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 07:17:07.931572 kernel: Console: colour VGA+ 80x25
Oct 9 07:17:07.931579 kernel: printk: console [ttyS0] enabled
Oct 9 07:17:07.931589 kernel: ACPI: Core revision 20230628
Oct 9 07:17:07.931596 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 07:17:07.931604 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 07:17:07.931611 kernel: x2apic enabled
Oct 9 07:17:07.931618 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 07:17:07.931625 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 9 07:17:07.931633 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 9 07:17:07.931640 kernel: kvm-guest: setup PV IPIs
Oct 9 07:17:07.931658 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 07:17:07.931666 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 9 07:17:07.931674 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Oct 9 07:17:07.931681 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 9 07:17:07.931691 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 9 07:17:07.931698 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 9 07:17:07.931706 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 07:17:07.931713 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 07:17:07.931721 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 07:17:07.931731 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 07:17:07.931739 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 9 07:17:07.931746 kernel: RETBleed: Mitigation: untrained return thunk
Oct 9 07:17:07.931756 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 07:17:07.931764 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 07:17:07.931772 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 9 07:17:07.931780 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 9 07:17:07.931788 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 9 07:17:07.931798 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 07:17:07.931805 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 07:17:07.931813 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 07:17:07.931820 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 07:17:07.931828 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 9 07:17:07.931835 kernel: Freeing SMP alternatives memory: 32K
Oct 9 07:17:07.931843 kernel: pid_max: default: 32768 minimum: 301
Oct 9 07:17:07.931850 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 9 07:17:07.931858 kernel: SELinux: Initializing.
Oct 9 07:17:07.931868 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 07:17:07.931875 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 07:17:07.931883 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 9 07:17:07.931891 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:17:07.931898 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:17:07.931906 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:17:07.931916 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 9 07:17:07.931924 kernel: ... version: 0
Oct 9 07:17:07.931931 kernel: ... bit width: 48
Oct 9 07:17:07.931942 kernel: ... generic registers: 6
Oct 9 07:17:07.931949 kernel: ... value mask: 0000ffffffffffff
Oct 9 07:17:07.931957 kernel: ... max period: 00007fffffffffff
Oct 9 07:17:07.931964 kernel: ... fixed-purpose events: 0
Oct 9 07:17:07.931972 kernel: ... event mask: 000000000000003f
Oct 9 07:17:07.931979 kernel: signal: max sigframe size: 1776
Oct 9 07:17:07.931987 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 07:17:07.931994 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 07:17:07.932002 kernel: smp: Bringing up secondary CPUs ...
Oct 9 07:17:07.932012 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 07:17:07.932020 kernel: .... node #0, CPUs: #1 #2 #3
Oct 9 07:17:07.932027 kernel: smp: Brought up 1 node, 4 CPUs
Oct 9 07:17:07.932034 kernel: smpboot: Max logical packages: 1
Oct 9 07:17:07.932042 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Oct 9 07:17:07.932049 kernel: devtmpfs: initialized
Oct 9 07:17:07.932057 kernel: x86/mm: Memory block size: 128MB
Oct 9 07:17:07.932065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 07:17:07.932072 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 9 07:17:07.932082 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 07:17:07.932090 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 07:17:07.932098 kernel: audit: initializing netlink subsys (disabled)
Oct 9 07:17:07.932105 kernel: audit: type=2000 audit(1728458227.819:1): state=initialized audit_enabled=0 res=1
Oct 9 07:17:07.932113 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 07:17:07.932120 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 07:17:07.932128 kernel: cpuidle: using governor menu
Oct 9 07:17:07.932135 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 07:17:07.932143 kernel: dca service started, version 1.12.1
Oct 9 07:17:07.932161 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 9 07:17:07.932169 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 9 07:17:07.932176 kernel: PCI: Using configuration type 1 for base access
Oct 9 07:17:07.932184 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 07:17:07.932191 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 07:17:07.932199 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 07:17:07.932207 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 07:17:07.932214 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 07:17:07.932222 kernel: ACPI: Added _OSI(Module Device)
Oct 9 07:17:07.932232 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 07:17:07.932239 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 07:17:07.932247 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 07:17:07.932254 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 07:17:07.932262 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 07:17:07.932269 kernel: ACPI: Interpreter enabled
Oct 9 07:17:07.932277 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 9 07:17:07.932319 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 07:17:07.932327 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 07:17:07.932338 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 07:17:07.932346 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 9 07:17:07.932354 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 07:17:07.932588 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 07:17:07.932732 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 9 07:17:07.932964 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 9 07:17:07.932976 kernel: PCI host bridge to bus 0000:00
Oct 9 07:17:07.933116 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 07:17:07.933249 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 07:17:07.933382 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 07:17:07.933497 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 9 07:17:07.933610 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 9 07:17:07.933726 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 9 07:17:07.933841 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 07:17:07.934002 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 9 07:17:07.934200 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 9 07:17:07.934383 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Oct 9 07:17:07.934516 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Oct 9 07:17:07.934705 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Oct 9 07:17:07.934837 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 07:17:07.934986 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 07:17:07.935123 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Oct 9 07:17:07.935265 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Oct 9 07:17:07.935413 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 9 07:17:07.935571 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:17:07.935699 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 07:17:07.935828 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Oct 9 07:17:07.935999 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 9 07:17:07.936176 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:17:07.936322 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Oct 9 07:17:07.936453 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Oct 9 07:17:07.936579 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 9 07:17:07.936704 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Oct 9 07:17:07.936852 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 9 07:17:07.937013 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 9 07:17:07.937191 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 9 07:17:07.937339 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Oct 9 07:17:07.937482 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Oct 9 07:17:07.937626 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 9 07:17:07.937755 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 9 07:17:07.937765 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 07:17:07.937779 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 07:17:07.937788 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 07:17:07.937796 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 07:17:07.937803 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 9 07:17:07.937811 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 9 07:17:07.937819 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 9 07:17:07.937827 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 9 07:17:07.937834 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 9 07:17:07.937842 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 9 07:17:07.937852 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 9 07:17:07.937860 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 9 07:17:07.937868 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 9 07:17:07.937876 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 9 07:17:07.937883 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 9 07:17:07.937891 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 9 07:17:07.937899 kernel: iommu: Default domain type: Translated
Oct 9 07:17:07.937907 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 07:17:07.937914 kernel: PCI: Using ACPI for IRQ routing
Oct 9 07:17:07.937925 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 07:17:07.937933 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 07:17:07.937941 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 9 07:17:07.938067 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 9 07:17:07.938202 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 9 07:17:07.938349 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 07:17:07.938360 kernel: vgaarb: loaded
Oct 9 07:17:07.938368 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 07:17:07.938381 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 07:17:07.938389 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 07:17:07.938397 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 07:17:07.938405 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 07:17:07.938412 kernel: pnp: PnP ACPI init
Oct 9 07:17:07.938567 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 9 07:17:07.938580 kernel: pnp: PnP ACPI: found 6 devices
Oct 9 07:17:07.938588 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 07:17:07.938599 kernel: NET: Registered PF_INET protocol family
Oct 9 07:17:07.938607 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 9 07:17:07.938615 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 9 07:17:07.938623 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 07:17:07.938631 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 07:17:07.938639 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 9 07:17:07.938646 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 9 07:17:07.938654 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 07:17:07.938662 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 07:17:07.938673 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 07:17:07.938681 kernel: NET: Registered PF_XDP protocol family
Oct 9 07:17:07.938800 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 07:17:07.938914 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 07:17:07.939029 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 07:17:07.939143 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 9 07:17:07.939267 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 9 07:17:07.939432 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 9 07:17:07.939449 kernel: PCI: CLS 0 bytes, default 64
Oct 9 07:17:07.939458 kernel: Initialise system trusted keyrings
Oct 9 07:17:07.939466 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 9 07:17:07.939474 kernel: Key type asymmetric registered
Oct 9 07:17:07.939481 kernel: Asymmetric key parser 'x509' registered
Oct 9 07:17:07.939489 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 07:17:07.939497 kernel: io scheduler mq-deadline registered
Oct 9 07:17:07.939505 kernel: io scheduler kyber registered
Oct 9 07:17:07.939513 kernel: io scheduler bfq registered
Oct 9 07:17:07.939521 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 07:17:07.939531 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 9 07:17:07.939539 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 9 07:17:07.939547 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 9 07:17:07.939555 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 07:17:07.939563 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 07:17:07.939571 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 07:17:07.939579 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 07:17:07.939586 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 07:17:07.939745 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 9 07:17:07.939761 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 07:17:07.939880 kernel: rtc_cmos 00:04: registered as rtc0
Oct 9 07:17:07.939999 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T07:17:07 UTC (1728458227)
Oct 9 07:17:07.940119 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 9 07:17:07.940130 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 9 07:17:07.940138 kernel: NET: Registered PF_INET6 protocol family
Oct 9 07:17:07.940154 kernel: Segment Routing with IPv6
Oct 9 07:17:07.940162 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 07:17:07.940174 kernel: NET: Registered PF_PACKET protocol family
Oct 9 07:17:07.940182 kernel: Key type dns_resolver registered
Oct 9 07:17:07.940190 kernel: IPI shorthand broadcast: enabled
Oct 9 07:17:07.940198 kernel: sched_clock: Marking stable (1072003341, 129275707)->(1225974572, -24695524)
Oct 9 07:17:07.940205 kernel: registered taskstats version 1
Oct 9 07:17:07.940213 kernel: Loading compiled-in X.509 certificates
Oct 9 07:17:07.940221 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76'
Oct 9 07:17:07.940228 kernel: Key type .fscrypt registered
Oct 9 07:17:07.940236 kernel: Key type fscrypt-provisioning registered
Oct 9 07:17:07.940246 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 07:17:07.940255 kernel: ima: Allocated hash algorithm: sha1
Oct 9 07:17:07.940262 kernel: ima: No architecture policies found
Oct 9 07:17:07.940270 kernel: clk: Disabling unused clocks
Oct 9 07:17:07.940291 kernel: Freeing unused kernel image (initmem) memory: 49452K
Oct 9 07:17:07.940300 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 07:17:07.940307 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K
Oct 9 07:17:07.940315 kernel: Run /init as init process
Oct 9 07:17:07.940326 kernel: with arguments:
Oct 9 07:17:07.940334 kernel: /init
Oct 9 07:17:07.940341 kernel: with environment:
Oct 9 07:17:07.940349 kernel: HOME=/
Oct 9 07:17:07.940356 kernel: TERM=linux
Oct 9 07:17:07.940364 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 07:17:07.940374 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:17:07.940384 systemd[1]: Detected virtualization kvm.
Oct 9 07:17:07.940395 systemd[1]: Detected architecture x86-64.
Oct 9 07:17:07.940404 systemd[1]: Running in initrd.
Oct 9 07:17:07.940412 systemd[1]: No hostname configured, using default hostname.
Oct 9 07:17:07.940420 systemd[1]: Hostname set to .
Oct 9 07:17:07.940428 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:17:07.940436 systemd[1]: Queued start job for default target initrd.target.
Oct 9 07:17:07.940445 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:17:07.940453 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:17:07.940465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 07:17:07.940473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:17:07.940494 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:17:07.940505 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:17:07.940515 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:17:07.940526 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:17:07.940535 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:17:07.940544 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:17:07.940552 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:17:07.940561 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:17:07.940569 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:17:07.940577 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:17:07.940586 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:17:07.940597 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:17:07.940605 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:17:07.940614 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:17:07.940623 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:17:07.940631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:17:07.940640 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:17:07.940648 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:17:07.940657 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 07:17:07.940665 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:17:07.940677 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 07:17:07.940685 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 07:17:07.940694 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:17:07.940702 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:17:07.940711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:17:07.940719 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 07:17:07.940728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:17:07.940737 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 07:17:07.940748 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:17:07.940780 systemd-journald[192]: Collecting audit messages is disabled.
Oct 9 07:17:07.940804 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:17:07.940815 systemd-journald[192]: Journal started
Oct 9 07:17:07.940837 systemd-journald[192]: Runtime Journal (/run/log/journal/f93d37ef7cf6402798f697fd5301d3dc) is 6.0M, max 48.4M, 42.3M free.
Oct 9 07:17:07.934678 systemd-modules-load[193]: Inserted module 'overlay'
Oct 9 07:17:07.973666 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 07:17:07.973686 kernel: Bridge firewalling registered
Oct 9 07:17:07.972718 systemd-modules-load[193]: Inserted module 'br_netfilter'
Oct 9 07:17:07.976498 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:17:07.976976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:17:07.979492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:17:07.995445 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:17:07.998910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:17:07.999842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:17:08.002495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:17:08.013238 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:17:08.018058 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:17:08.021651 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:17:08.024604 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:17:08.038569 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 07:17:08.041264 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:17:08.052667 dracut-cmdline[228]: dracut-dracut-053
Oct 9 07:17:08.059472 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:17:08.092792 systemd-resolved[230]: Positive Trust Anchors:
Oct 9 07:17:08.092811 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:17:08.092842 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:17:08.095539 systemd-resolved[230]: Defaulting to hostname 'linux'.
Oct 9 07:17:08.096742 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:17:08.102587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:17:08.147329 kernel: SCSI subsystem initialized
Oct 9 07:17:08.158318 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 07:17:08.170318 kernel: iscsi: registered transport (tcp)
Oct 9 07:17:08.196323 kernel: iscsi: registered transport (qla4xxx)
Oct 9 07:17:08.196377 kernel: QLogic iSCSI HBA Driver
Oct 9 07:17:08.267162 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:17:08.281468 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 07:17:08.312175 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 07:17:08.312274 kernel: device-mapper: uevent: version 1.0.3
Oct 9 07:17:08.312299 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 07:17:08.360333 kernel: raid6: avx2x4 gen() 25973 MB/s
Oct 9 07:17:08.377302 kernel: raid6: avx2x2 gen() 25779 MB/s
Oct 9 07:17:08.394411 kernel: raid6: avx2x1 gen() 25784 MB/s
Oct 9 07:17:08.394443 kernel: raid6: using algorithm avx2x4 gen() 25973 MB/s
Oct 9 07:17:08.412384 kernel: raid6: .... xor() 7415 MB/s, rmw enabled
Oct 9 07:17:08.412407 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 07:17:08.438310 kernel: xor: automatically using best checksumming function avx
Oct 9 07:17:08.623333 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 07:17:08.638325 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:17:08.651556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:17:08.664976 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Oct 9 07:17:08.669913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:17:08.682440 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 07:17:08.698582 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Oct 9 07:17:08.737958 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:17:08.752470 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:17:08.826037 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:17:08.840462 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 07:17:08.853864 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:17:08.856238 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:17:08.858787 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:17:08.864512 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 9 07:17:08.861395 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:17:08.866330 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 07:17:08.870301 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 9 07:17:08.874062 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 07:17:08.874081 kernel: GPT:9289727 != 19775487
Oct 9 07:17:08.874092 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 07:17:08.874103 kernel: GPT:9289727 != 19775487
Oct 9 07:17:08.874129 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 07:17:08.874140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:17:08.872086 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 07:17:08.890357 kernel: libata version 3.00 loaded.
Oct 9 07:17:08.893088 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:17:08.904571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:17:08.904695 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:17:08.910340 kernel: ahci 0000:00:1f.2: version 3.0
Oct 9 07:17:08.912479 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 9 07:17:08.912513 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 07:17:08.912537 kernel: AES CTR mode by8 optimization enabled
Oct 9 07:17:08.909261 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:17:08.912402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:17:08.912534 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:17:08.922331 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 9 07:17:08.922533 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 9 07:17:08.913970 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:17:08.927095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:17:08.933326 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (457)
Oct 9 07:17:08.937305 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (476)
Oct 9 07:17:09.014971 kernel: scsi host0: ahci
Oct 9 07:17:09.017312 kernel: scsi host1: ahci
Oct 9 07:17:09.017586 kernel: scsi host2: ahci
Oct 9 07:17:09.022095 kernel: scsi host3: ahci
Oct 9 07:17:09.024300 kernel: scsi host4: ahci
Oct 9 07:17:09.026302 kernel: scsi host5: ahci
Oct 9 07:17:09.026484 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Oct 9 07:17:09.026498 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Oct 9 07:17:09.026508 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Oct 9 07:17:09.026519 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Oct 9 07:17:09.026529 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Oct 9 07:17:09.026539 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Oct 9 07:17:09.026975 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 07:17:09.065673 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 07:17:09.066089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:17:09.074691 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 07:17:09.074820 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 07:17:09.082852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:17:09.096480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 07:17:09.099126 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:17:09.107852 disk-uuid[568]: Primary Header is updated.
Oct 9 07:17:09.107852 disk-uuid[568]: Secondary Entries is updated.
Oct 9 07:17:09.107852 disk-uuid[568]: Secondary Header is updated.
Oct 9 07:17:09.111321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:17:09.116311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:17:09.120312 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:17:09.131550 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:17:09.337474 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 9 07:17:09.337561 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 9 07:17:09.337590 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 9 07:17:09.339329 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 9 07:17:09.339424 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 9 07:17:09.340316 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 9 07:17:09.341370 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 9 07:17:09.341389 kernel: ata3.00: applying bridge limits
Oct 9 07:17:09.342562 kernel: ata3.00: configured for UDMA/100
Oct 9 07:17:09.343310 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 9 07:17:09.396348 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 9 07:17:09.396732 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 9 07:17:09.410312 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 9 07:17:10.121339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:17:10.121499 disk-uuid[570]: The operation has completed successfully.
Oct 9 07:17:10.154106 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 07:17:10.154320 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 07:17:10.192526 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 07:17:10.196508 sh[595]: Success
Oct 9 07:17:10.212318 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 9 07:17:10.252601 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 07:17:10.278264 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 07:17:10.281870 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 07:17:10.313374 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a
Oct 9 07:17:10.313420 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:17:10.313431 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 07:17:10.314446 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 07:17:10.315836 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 07:17:10.319805 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 07:17:10.320612 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 07:17:10.333423 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 07:17:10.336048 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 07:17:10.345677 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:17:10.345705 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:17:10.345716 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:17:10.349316 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:17:10.360359 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 07:17:10.362119 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:17:10.371858 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 07:17:10.376847 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 07:17:10.494618 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:17:10.503252 ignition[683]: Ignition 2.18.0
Oct 9 07:17:10.503266 ignition[683]: Stage: fetch-offline
Oct 9 07:17:10.503566 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:17:10.503339 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:17:10.503352 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:17:10.503603 ignition[683]: parsed url from cmdline: ""
Oct 9 07:17:10.503607 ignition[683]: no config URL provided
Oct 9 07:17:10.503613 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:17:10.503623 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:17:10.503657 ignition[683]: op(1): [started] loading QEMU firmware config module
Oct 9 07:17:10.503662 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 07:17:10.512795 ignition[683]: op(1): [finished] loading QEMU firmware config module
Oct 9 07:17:10.529828 systemd-networkd[783]: lo: Link UP
Oct 9 07:17:10.529836 systemd-networkd[783]: lo: Gained carrier
Oct 9 07:17:10.533318 systemd-networkd[783]: Enumeration completed
Oct 9 07:17:10.534295 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:17:10.535632 systemd[1]: Reached target network.target - Network.
Oct 9 07:17:10.538894 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:17:10.538900 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:17:10.543468 systemd-networkd[783]: eth0: Link UP
Oct 9 07:17:10.543475 systemd-networkd[783]: eth0: Gained carrier
Oct 9 07:17:10.543482 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:17:10.564334 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 07:17:10.569466 ignition[683]: parsing config with SHA512: baddbcf180fdd5ebdee754ee99d903025b0c953e05eee33777033abe291b2c2223f55535572f0ba02c066c3cb1ab48c212d812691922908a76b4ac3620e092b7
Oct 9 07:17:10.575263 unknown[683]: fetched base config from "system"
Oct 9 07:17:10.575274 unknown[683]: fetched user config from "qemu"
Oct 9 07:17:10.575643 ignition[683]: fetch-offline: fetch-offline passed
Oct 9 07:17:10.575729 ignition[683]: Ignition finished successfully
Oct 9 07:17:10.580757 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:17:10.581030 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 07:17:10.590468 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 07:17:10.613269 ignition[788]: Ignition 2.18.0
Oct 9 07:17:10.613298 ignition[788]: Stage: kargs
Oct 9 07:17:10.613494 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:17:10.613507 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:17:10.614465 ignition[788]: kargs: kargs passed
Oct 9 07:17:10.614515 ignition[788]: Ignition finished successfully
Oct 9 07:17:10.621112 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 07:17:10.629510 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 07:17:10.645572 ignition[797]: Ignition 2.18.0
Oct 9 07:17:10.645585 ignition[797]: Stage: disks
Oct 9 07:17:10.645766 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:17:10.645778 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:17:10.649533 ignition[797]: disks: disks passed
Oct 9 07:17:10.649588 ignition[797]: Ignition finished successfully
Oct 9 07:17:10.667958 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 07:17:10.670103 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 07:17:10.670179 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:17:10.672299 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:17:10.674575 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:17:10.676428 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:17:10.688426 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 07:17:10.716257 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 07:17:10.794876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 07:17:10.802473 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 07:17:10.908308 kernel: EXT4-fs (vda9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none.
Oct 9 07:17:10.908391 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 07:17:10.910699 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:17:10.922354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:17:10.923262 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 07:17:10.925043 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 9 07:17:10.925089 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 07:17:10.936071 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
Oct 9 07:17:10.936089 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:17:10.936102 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:17:10.936113 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:17:10.925110 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:17:10.932308 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 07:17:10.940300 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:17:10.937122 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 07:17:10.942654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:17:10.980873 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 07:17:10.986535 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Oct 9 07:17:10.992319 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 07:17:10.997168 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 07:17:11.083067 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 07:17:11.098475 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 07:17:11.100259 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 07:17:11.108308 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:17:11.127458 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 07:17:11.133294 ignition[931]: INFO : Ignition 2.18.0
Oct 9 07:17:11.133294 ignition[931]: INFO : Stage: mount
Oct 9 07:17:11.134966 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:17:11.134966 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:17:11.137849 ignition[931]: INFO : mount: mount passed
Oct 9 07:17:11.138625 ignition[931]: INFO : Ignition finished successfully
Oct 9 07:17:11.141608 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 07:17:11.149649 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 07:17:11.312614 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 07:17:11.329435 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:17:11.336311 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943)
Oct 9 07:17:11.336348 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:17:11.337312 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:17:11.338772 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:17:11.341320 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:17:11.342869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:17:11.364038 ignition[960]: INFO : Ignition 2.18.0
Oct 9 07:17:11.364038 ignition[960]: INFO : Stage: files
Oct 9 07:17:11.365885 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:17:11.365885 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:17:11.365885 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 07:17:11.369300 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 07:17:11.369300 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 07:17:11.373822 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 07:17:11.375305 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 07:17:11.377021 unknown[960]: wrote ssh authorized keys file for user: core
Oct 9 07:17:11.378138 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 07:17:11.380672 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:17:11.382603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 07:17:11.425479 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 07:17:11.532437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 07:17:11.534603 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Oct 9 07:17:11.863162 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 07:17:12.242690 systemd-networkd[783]: eth0: Gained IPv6LL
Oct 9 07:17:12.284768 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Oct 9 07:17:12.284768 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 07:17:12.288591 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:17:12.290794 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:17:12.290794 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 07:17:12.290794 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 9 07:17:12.295102 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 07:17:12.296981 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 07:17:12.296981 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 9 07:17:12.300107 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 07:17:12.322548 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 07:17:12.329086 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 07:17:12.330878 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 07:17:12.330878 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 07:17:12.330878 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 07:17:12.330878 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:17:12.330878 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:17:12.330878 ignition[960]: INFO : files: files passed
Oct 9 07:17:12.330878 ignition[960]: INFO : Ignition finished successfully
Oct 9 07:17:12.342335 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 07:17:12.361410 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 07:17:12.363902 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 07:17:12.365216 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 07:17:12.365348 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 07:17:12.388772 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 07:17:12.393255 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:17:12.394911 initrd-setup-root-after-ignition[994]: grep:
Oct 9 07:17:12.395709 initrd-setup-root-after-ignition[994]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:17:12.397108 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:17:12.398405 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:17:12.400036 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 07:17:12.415439 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 07:17:12.442499 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 07:17:12.442631 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 07:17:12.443742 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 07:17:12.446905 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 07:17:12.448841 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 07:17:12.451383 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 07:17:12.469594 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:17:12.478444 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 07:17:12.489844 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:17:12.489984 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:17:12.493353 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 07:17:12.495369 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 07:17:12.495487 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:17:12.496700 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 07:17:12.499328 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 07:17:12.501986 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 07:17:12.502866 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:17:12.503203 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 07:17:12.507049 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 07:17:12.507534 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:17:12.511122 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 07:17:12.514528 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 07:17:12.516496 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 07:17:12.518221 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 07:17:12.518349 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:17:12.520997 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:17:12.523115 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:17:12.525297 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:17:12.525382 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:17:12.527614 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:17:12.527725 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:17:12.530927 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:17:12.531048 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:17:12.533096 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:17:12.534873 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:17:12.538330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:17:12.540114 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:17:12.542435 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:17:12.543381 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 07:17:12.543473 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:17:12.543871 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:17:12.543956 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:17:12.547733 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:17:12.547845 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:17:12.549835 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:17:12.549940 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:17:12.560410 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:17:12.560484 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:17:12.560596 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:17:12.564161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:17:12.565238 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:17:12.565368 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:17:12.567862 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:17:12.568026 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:17:12.575751 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:17:12.575885 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:17:12.593586 ignition[1016]: INFO : Ignition 2.18.0
Oct 9 07:17:12.593586 ignition[1016]: INFO : Stage: umount
Oct 9 07:17:12.595304 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:17:12.595304 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 07:17:12.595225 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:17:12.599203 ignition[1016]: INFO : umount: umount passed
Oct 9 07:17:12.600142 ignition[1016]: INFO : Ignition finished successfully
Oct 9 07:17:12.603319 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:17:12.603487 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:17:12.604694 systemd[1]: Stopped target network.target - Network.
Oct 9 07:17:12.607157 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:17:12.607224 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:17:12.608099 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:17:12.608150 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:17:12.608616 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:17:12.608667 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:17:12.608934 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:17:12.608980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:17:12.609466 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:17:12.609919 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:17:12.619858 systemd-networkd[783]: eth0: DHCPv6 lease lost
Oct 9 07:17:12.623225 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:17:12.623439 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:17:12.625545 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:17:12.625695 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:17:12.628069 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:17:12.628154 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:17:12.644435 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:17:12.644521 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:17:12.644586 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:17:12.647605 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:17:12.647663 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:17:12.651202 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:17:12.651260 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:17:12.653588 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:17:12.653652 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:17:12.655845 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:17:12.692604 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:17:12.692804 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:17:12.694622 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:17:12.694712 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:17:12.697271 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:17:12.697327 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:17:12.699330 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:17:12.699385 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:17:12.703509 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:17:12.703585 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:17:12.706798 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:17:12.706867 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:17:12.727440 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:17:12.727527 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:17:12.727592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:17:12.730979 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:17:12.731047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:17:12.733731 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:17:12.733854 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:17:12.735914 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:17:12.736035 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:17:12.738750 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:17:12.738874 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:17:12.741219 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:17:12.742962 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:17:12.743032 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:17:12.755428 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:17:12.762990 systemd[1]: Switching root.
Oct 9 07:17:12.788005 systemd-journald[192]: Journal stopped
Oct 9 07:17:13.820933 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:17:13.821013 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:17:13.821039 kernel: SELinux: policy capability open_perms=1
Oct 9 07:17:13.821051 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:17:13.821063 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:17:13.821074 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:17:13.821086 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:17:13.821098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:17:13.821119 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:17:13.821131 kernel: audit: type=1403 audit(1728458233.083:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:17:13.821149 systemd[1]: Successfully loaded SELinux policy in 40.682ms.
Oct 9 07:17:13.821176 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.162ms.
Oct 9 07:17:13.821190 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:17:13.821205 systemd[1]: Detected virtualization kvm.
Oct 9 07:17:13.821217 systemd[1]: Detected architecture x86-64.
Oct 9 07:17:13.821230 systemd[1]: Detected first boot.
Oct 9 07:17:13.821247 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:17:13.821260 zram_generator::config[1061]: No configuration found.
Oct 9 07:17:13.821273 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:17:13.821392 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 07:17:13.821406 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 07:17:13.821419 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 07:17:13.821431 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:17:13.821444 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:17:13.821456 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:17:13.821475 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:17:13.821488 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:17:13.821500 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:17:13.821513 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:17:13.821525 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:17:13.821537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:17:13.821549 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:17:13.821562 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:17:13.821580 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:17:13.821593 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:17:13.821605 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:17:13.821617 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:17:13.821630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:17:13.821642 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 07:17:13.821655 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 07:17:13.821667 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:17:13.821685 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:17:13.821697 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:17:13.821709 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:17:13.821723 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:17:13.821735 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:17:13.821747 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:17:13.821759 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:17:13.821772 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:17:13.821784 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:17:13.821801 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:17:13.821813 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:17:13.821825 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:17:13.821838 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:17:13.821853 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:17:13.821865 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:17:13.821878 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:17:13.821890 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:17:13.821902 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:17:13.821920 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:17:13.821933 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:17:13.821945 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:17:13.821958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:17:13.821970 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:17:13.821991 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:17:13.822005 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:17:13.822018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:17:13.822036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:17:13.822048 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:17:13.822060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:17:13.822073 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:17:13.822085 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 07:17:13.822097 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 07:17:13.822109 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 07:17:13.822121 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 07:17:13.822133 kernel: loop: module loaded
Oct 9 07:17:13.822151 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:17:13.822163 kernel: fuse: init (API version 7.39)
Oct 9 07:17:13.822175 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:17:13.822187 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:17:13.822200 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:17:13.822230 systemd-journald[1130]: Collecting audit messages is disabled.
Oct 9 07:17:13.822253 systemd-journald[1130]: Journal started
Oct 9 07:17:13.822294 systemd-journald[1130]: Runtime Journal (/run/log/journal/f93d37ef7cf6402798f697fd5301d3dc) is 6.0M, max 48.4M, 42.3M free.
Oct 9 07:17:13.823500 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:17:13.594031 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:17:13.611132 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:17:13.611669 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 07:17:13.826753 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 07:17:13.826815 systemd[1]: Stopped verity-setup.service.
Oct 9 07:17:13.829373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:17:13.832472 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:17:13.833543 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:17:13.835305 kernel: ACPI: bus type drm_connector registered
Oct 9 07:17:13.835478 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:17:13.836856 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:17:13.838068 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:17:13.839454 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:17:13.840820 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:17:13.842118 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:17:13.843780 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:17:13.843975 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:17:13.845602 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:17:13.845783 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:17:13.849565 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:17:13.849754 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:17:13.851306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:17:13.851503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:17:13.853135 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:17:13.853346 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:17:13.854896 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:17:13.855083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:17:13.856553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:17:13.858230 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:17:13.859933 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:17:13.877803 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:17:13.886508 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:17:13.889917 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:17:13.891143 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:17:13.891189 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:17:13.893426 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:17:13.902484 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:17:13.904933 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:17:13.910572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:17:13.917507 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:17:13.922998 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:17:13.924322 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:17:13.927184 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:17:13.928605 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:17:13.932128 systemd-journald[1130]: Time spent on flushing to /var/log/journal/f93d37ef7cf6402798f697fd5301d3dc is 12.793ms for 947 entries.
Oct 9 07:17:13.932128 systemd-journald[1130]: System Journal (/var/log/journal/f93d37ef7cf6402798f697fd5301d3dc) is 8.0M, max 195.6M, 187.6M free.
Oct 9 07:17:14.177739 systemd-journald[1130]: Received client request to flush runtime journal.
Oct 9 07:17:14.177788 kernel: loop0: detected capacity change from 0 to 80568
Oct 9 07:17:14.177818 kernel: block loop0: the capability attribute has been deprecated.
Oct 9 07:17:14.177990 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:17:14.178012 kernel: loop1: detected capacity change from 0 to 205544
Oct 9 07:17:14.178032 kernel: loop2: detected capacity change from 0 to 139904
Oct 9 07:17:13.931903 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:17:13.936121 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:17:13.939246 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:17:13.943843 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:17:13.945310 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:17:13.946967 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:17:13.961757 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:17:13.965645 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:17:13.967646 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:17:13.983480 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 9 07:17:13.995700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:17:14.004894 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:17:14.013494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:17:14.082905 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Oct 9 07:17:14.082919 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Oct 9 07:17:14.089016 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:17:14.096498 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:17:14.098208 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:17:14.107867 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:17:14.180206 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:17:14.197613 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:17:14.198335 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:17:14.219315 kernel: loop3: detected capacity change from 0 to 80568
Oct 9 07:17:14.226303 kernel: loop4: detected capacity change from 0 to 205544
Oct 9 07:17:14.234319 kernel: loop5: detected capacity change from 0 to 139904
Oct 9 07:17:14.242749 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 07:17:14.244161 (sd-merge)[1198]: Merged extensions into '/usr'.
Oct 9 07:17:14.248503 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:17:14.248524 systemd[1]: Reloading...
Oct 9 07:17:14.313313 zram_generator::config[1225]: No configuration found.
Oct 9 07:17:14.381654 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:17:14.435159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:17:14.483831 systemd[1]: Reloading finished in 234 ms.
Oct 9 07:17:14.517065 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:17:14.518685 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:17:14.535472 systemd[1]: Starting ensure-sysext.service...
Oct 9 07:17:14.537452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:17:14.545436 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:17:14.545447 systemd[1]: Reloading...
Oct 9 07:17:14.683212 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:17:14.683618 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:17:14.684722 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:17:14.685051 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Oct 9 07:17:14.685127 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Oct 9 07:17:14.695193 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:17:14.695209 systemd-tmpfiles[1260]: Skipping /boot
Oct 9 07:17:14.699413 zram_generator::config[1284]: No configuration found.
Oct 9 07:17:14.709553 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:17:14.709567 systemd-tmpfiles[1260]: Skipping /boot
Oct 9 07:17:14.818655 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:17:14.869770 systemd[1]: Reloading finished in 323 ms.
Oct 9 07:17:14.889984 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 07:17:14.902749 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:17:14.911471 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:17:14.913963 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 07:17:14.916324 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 07:17:14.921394 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:17:14.932539 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:17:14.935757 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 07:17:14.940146 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:17:14.940337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:17:14.946525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:17:14.949623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:17:14.953403 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:17:14.955108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:17:14.958686 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Oct 9 07:17:14.959420 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 07:17:14.960758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:17:14.962455 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 07:17:14.964693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:17:14.964874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:17:14.968758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:17:14.969030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:17:14.970962 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:17:14.971134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:17:14.973342 augenrules[1348]: No rules
Oct 9 07:17:14.974505 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 9 07:17:14.983776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:17:14.984021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:17:14.993526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:17:14.996815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:17:15.000540 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:17:15.001698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:17:15.003570 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 07:17:15.005193 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:17:15.006048 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:17:15.008180 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 07:17:15.010185 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 07:17:15.011898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:17:15.012348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:17:15.014028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:17:15.014218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:17:15.022249 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 07:17:15.031721 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:17:15.031928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:17:15.045407 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:17:15.045623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:17:15.056533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:17:15.063219 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:17:15.065640 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:17:15.069553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:17:15.069780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:17:15.070646 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1362)
Oct 9 07:17:15.075490 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:17:15.076599 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 07:17:15.076636 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:17:15.077375 systemd[1]: Finished ensure-sysext.service.
Oct 9 07:17:15.079713 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 07:17:15.081455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:17:15.081652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:17:15.083430 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:17:15.083618 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:17:15.144704 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 07:17:15.152662 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:17:15.152859 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:17:15.159195 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1366)
Oct 9 07:17:15.157443 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:17:15.159759 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 07:17:15.162868 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:17:15.163095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:17:15.166431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:17:15.192820 systemd-resolved[1327]: Positive Trust Anchors:
Oct 9 07:17:15.192840 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:17:15.192872 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:17:15.200149 systemd-resolved[1327]: Defaulting to hostname 'linux'.
Oct 9 07:17:15.204932 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:17:15.206274 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:17:15.221853 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:17:15.252443 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 9 07:17:15.250757 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 07:17:15.259302 kernel: ACPI: button: Power Button [PWRF]
Oct 9 07:17:15.263365 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 07:17:15.264752 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 07:17:15.273314 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Oct 9 07:17:15.274515 systemd-networkd[1396]: lo: Link UP
Oct 9 07:17:15.274529 systemd-networkd[1396]: lo: Gained carrier
Oct 9 07:17:15.279665 systemd-networkd[1396]: Enumeration completed
Oct 9 07:17:15.279767 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:17:15.280125 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:17:15.280137 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:17:15.281103 systemd[1]: Reached target network.target - Network.
Oct 9 07:17:15.282457 systemd-networkd[1396]: eth0: Link UP
Oct 9 07:17:15.282470 systemd-networkd[1396]: eth0: Gained carrier
Oct 9 07:17:15.282483 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:17:15.288574 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 9 07:17:15.288814 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 9 07:17:15.288999 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 9 07:17:15.290673 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 07:17:15.292262 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 07:17:15.298335 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 07:17:15.299687 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Oct 9 07:17:16.491626 systemd-resolved[1327]: Clock change detected. Flushing caches.
Oct 9 07:17:16.491803 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 9 07:17:16.491871 systemd-timesyncd[1407]: Initial clock synchronization to Wed 2024-10-09 07:17:16.491537 UTC.
Oct 9 07:17:16.525792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:17:16.582602 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 07:17:16.597877 kernel: kvm_amd: TSC scaling supported
Oct 9 07:17:16.597907 kernel: kvm_amd: Nested Virtualization enabled
Oct 9 07:17:16.597937 kernel: kvm_amd: Nested Paging enabled
Oct 9 07:17:16.597965 kernel: kvm_amd: LBR virtualization supported
Oct 9 07:17:16.599014 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 9 07:17:16.599035 kernel: kvm_amd: Virtual GIF supported
Oct 9 07:17:16.621612 kernel: EDAC MC: Ver: 3.0.0
Oct 9 07:17:16.667397 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 07:17:16.677190 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:17:16.694754 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 07:17:16.706189 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:17:16.741238 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 07:17:16.742870 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:17:16.744274 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:17:16.745625 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 07:17:16.746917 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 07:17:16.748412 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 07:17:16.749641 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 07:17:16.750888 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 07:17:16.752124 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 07:17:16.752156 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:17:16.753068 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:17:16.754765 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 07:17:16.757564 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 07:17:16.770315 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 07:17:16.772722 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 07:17:16.774302 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 07:17:16.775459 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:17:16.776428 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:17:16.777387 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:17:16.777415 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:17:16.778447 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 07:17:16.780541 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 07:17:16.782713 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 07:17:16.786847 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:17:16.786706 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 07:17:16.787907 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 07:17:16.791388 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 07:17:16.794750 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 07:17:16.800086 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 07:17:16.800273 jq[1435]: false
Oct 9 07:17:16.802825 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 07:17:16.807732 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 07:17:16.809356 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 07:17:16.809834 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 07:17:16.810686 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 07:17:16.814716 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 07:17:16.817672 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found loop3
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found loop4
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found loop5
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found sr0
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found vda
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found vda1
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found vda2
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found vda3
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found usr
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found vda4
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found vda6
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found vda7
Oct 9 07:17:16.819459 extend-filesystems[1436]: Found vda9
Oct 9 07:17:16.819459 extend-filesystems[1436]: Checking size of /dev/vda9
Oct 9 07:17:16.820473 dbus-daemon[1434]: [system] SELinux support is enabled
Oct 9 07:17:16.821560 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 07:17:16.855256 update_engine[1444]: I1009 07:17:16.834686 1444 main.cc:92] Flatcar Update Engine starting
Oct 9 07:17:16.855256 update_engine[1444]: I1009 07:17:16.836586 1444 update_check_scheduler.cc:74] Next update check in 6m14s
Oct 9 07:17:16.828829 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 07:17:16.862903 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1366)
Oct 9 07:17:16.829064 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 07:17:16.863091 jq[1446]: true
Oct 9 07:17:16.836396 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 07:17:16.839679 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 07:17:16.855842 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 07:17:16.855868 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 07:17:16.858369 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 07:17:16.864361 extend-filesystems[1436]: Resized partition /dev/vda9
Oct 9 07:17:16.858386 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 07:17:16.861753 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 07:17:16.866955 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 07:17:16.871183 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 07:17:16.871502 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 07:17:16.892929 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 07:17:16.935263 tar[1453]: linux-amd64/helm
Oct 9 07:17:16.942244 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 07:17:16.944978 extend-filesystems[1466]: resize2fs 1.47.0 (5-Feb-2023)
Oct 9 07:17:16.950445 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 9 07:17:16.961551 jq[1463]: true
Oct 9 07:17:16.985791 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 9 07:17:16.986969 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 07:17:16.998907 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 07:17:17.003827 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 07:17:17.011565 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 07:17:17.011823 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 07:17:17.011924 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 07:17:17.011949 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 07:17:17.013690 systemd-logind[1442]: New seat seat0.
Oct 9 07:17:17.015402 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 07:17:17.015402 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 9 07:17:17.015402 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 9 07:17:17.020524 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 07:17:17.045304 bash[1503]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:17:17.045443 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Oct 9 07:17:17.020788 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 07:17:17.023876 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 07:17:17.046733 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 07:17:17.050199 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 9 07:17:17.061856 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 07:17:17.080970 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 07:17:17.089047 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 07:17:17.092411 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 07:17:17.093778 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 07:17:17.397129 containerd[1459]: time="2024-10-09T07:17:17.396905092Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Oct 9 07:17:17.426830 containerd[1459]: time="2024-10-09T07:17:17.426461340Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 07:17:17.426830 containerd[1459]: time="2024-10-09T07:17:17.426537493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:17:17.428875 containerd[1459]: time="2024-10-09T07:17:17.428819722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:17:17.428875 containerd[1459]: time="2024-10-09T07:17:17.428860348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:17:17.429141 containerd[1459]: time="2024-10-09T07:17:17.429114044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:17:17.429141 containerd[1459]: time="2024-10-09T07:17:17.429136606Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 07:17:17.429294 containerd[1459]: time="2024-10-09T07:17:17.429275566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 07:17:17.429375 containerd[1459]: time="2024-10-09T07:17:17.429357029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:17:17.429407 containerd[1459]: time="2024-10-09T07:17:17.429373200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 07:17:17.429495 containerd[1459]: time="2024-10-09T07:17:17.429477495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:17:17.429805 containerd[1459]: time="2024-10-09T07:17:17.429777307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 07:17:17.429805 containerd[1459]: time="2024-10-09T07:17:17.429801002Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 9 07:17:17.429846 containerd[1459]: time="2024-10-09T07:17:17.429811551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:17:17.429959 containerd[1459]: time="2024-10-09T07:17:17.429932258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:17:17.429959 containerd[1459]: time="2024-10-09T07:17:17.429950923Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 07:17:17.430045 containerd[1459]: time="2024-10-09T07:17:17.430021385Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 9 07:17:17.430045 containerd[1459]: time="2024-10-09T07:17:17.430038838Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 07:17:17.435357 containerd[1459]: time="2024-10-09T07:17:17.435308258Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 07:17:17.435398 containerd[1459]: time="2024-10-09T07:17:17.435366988Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 07:17:17.435398 containerd[1459]: time="2024-10-09T07:17:17.435383359Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 07:17:17.435508 containerd[1459]: time="2024-10-09T07:17:17.435433753Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 07:17:17.435508 containerd[1459]: time="2024-10-09T07:17:17.435455283Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 07:17:17.435508 containerd[1459]: time="2024-10-09T07:17:17.435474289Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 07:17:17.435508 containerd[1459]: time="2024-10-09T07:17:17.435491672Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 07:17:17.435738 containerd[1459]: time="2024-10-09T07:17:17.435710682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 07:17:17.435738 containerd[1459]: time="2024-10-09T07:17:17.435734858Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 07:17:17.435785 containerd[1459]: time="2024-10-09T07:17:17.435749455Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 07:17:17.435785 containerd[1459]: time="2024-10-09T07:17:17.435768090Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 07:17:17.435822 containerd[1459]: time="2024-10-09T07:17:17.435796403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 07:17:17.435822 containerd[1459]: time="2024-10-09T07:17:17.435816901Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 07:17:17.435870 containerd[1459]: time="2024-10-09T07:17:17.435830868Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 07:17:17.435870 containerd[1459]: time="2024-10-09T07:17:17.435844694Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 07:17:17.435870 containerd[1459]: time="2024-10-09T07:17:17.435859371Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 07:17:17.435923 containerd[1459]: time="2024-10-09T07:17:17.435876022Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 07:17:17.435923 containerd[1459]: time="2024-10-09T07:17:17.435889558Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 07:17:17.435923 containerd[1459]: time="2024-10-09T07:17:17.435902392Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 07:17:17.436072 containerd[1459]: time="2024-10-09T07:17:17.436036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 07:17:17.436412 containerd[1459]: time="2024-10-09T07:17:17.436385828Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 07:17:17.436446 containerd[1459]: time="2024-10-09T07:17:17.436433668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436484 containerd[1459]: time="2024-10-09T07:17:17.436451772Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 07:17:17.436514 containerd[1459]: time="2024-10-09T07:17:17.436492438Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 07:17:17.436621 containerd[1459]: time="2024-10-09T07:17:17.436602645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436674 containerd[1459]: time="2024-10-09T07:17:17.436640065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436674 containerd[1459]: time="2024-10-09T07:17:17.436660443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436714 containerd[1459]: time="2024-10-09T07:17:17.436673768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436714 containerd[1459]: time="2024-10-09T07:17:17.436688566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436714 containerd[1459]: time="2024-10-09T07:17:17.436702221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436783 containerd[1459]: time="2024-10-09T07:17:17.436714394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436783 containerd[1459]: time="2024-10-09T07:17:17.436727329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436783 containerd[1459]: time="2024-10-09T07:17:17.436740994Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 07:17:17.436987 containerd[1459]: time="2024-10-09T07:17:17.436954064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.436987 containerd[1459]: time="2024-10-09T07:17:17.436979000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.437035 containerd[1459]: time="2024-10-09T07:17:17.436992155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.437035 containerd[1459]: time="2024-10-09T07:17:17.437005700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.437035 containerd[1459]: time="2024-10-09T07:17:17.437018845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.437035 containerd[1459]: time="2024-10-09T07:17:17.437031268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.437117 containerd[1459]: time="2024-10-09T07:17:17.437043942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.437117 containerd[1459]: time="2024-10-09T07:17:17.437055073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 07:17:17.437480 containerd[1459]: time="2024-10-09T07:17:17.437419817Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 07:17:17.437728 containerd[1459]: time="2024-10-09T07:17:17.437490369Z" level=info msg="Connect containerd service"
Oct 9 07:17:17.437728 containerd[1459]: time="2024-10-09T07:17:17.437525605Z" level=info msg="using legacy CRI server"
Oct 9 07:17:17.437728 containerd[1459]: time="2024-10-09T07:17:17.437533300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 07:17:17.437728 containerd[1459]: time="2024-10-09T07:17:17.437660348Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 07:17:17.438413 containerd[1459]: time="2024-10-09T07:17:17.438376761Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 07:17:17.438460 containerd[1459]: time="2024-10-09T07:17:17.438444498Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 07:17:17.438482 containerd[1459]: time="2024-10-09T07:17:17.438467632Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 07:17:17.438482 containerd[1459]: time="2024-10-09T07:17:17.438479644Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 07:17:17.438521 containerd[1459]: time="2024-10-09T07:17:17.438493600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 07:17:17.438842 containerd[1459]: time="2024-10-09T07:17:17.438664651Z" level=info msg="Start subscribing containerd event"
Oct 9 07:17:17.438842 containerd[1459]: time="2024-10-09T07:17:17.438752165Z" level=info msg="Start recovering state"
Oct 9 07:17:17.438842 containerd[1459]: time="2024-10-09T07:17:17.438824651Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 07:17:17.438924 containerd[1459]: time="2024-10-09T07:17:17.438830262Z" level=info msg="Start event monitor"
Oct 9 07:17:17.438924 containerd[1459]: time="2024-10-09T07:17:17.438878011Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 07:17:17.438924 containerd[1459]: time="2024-10-09T07:17:17.438884864Z" level=info msg="Start snapshots syncer"
Oct 9 07:17:17.438924 containerd[1459]: time="2024-10-09T07:17:17.438916543Z" level=info msg="Start cni network conf syncer for default"
Oct 9 07:17:17.438994 containerd[1459]: time="2024-10-09T07:17:17.438929638Z" level=info msg="Start streaming server"
Oct 9 07:17:17.439267 containerd[1459]: time="2024-10-09T07:17:17.439035306Z" level=info msg="containerd successfully booted in 0.044163s"
Oct 9 07:17:17.439126 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 07:17:17.507293 tar[1453]: linux-amd64/LICENSE
Oct 9 07:17:17.507413 tar[1453]: linux-amd64/README.md
Oct 9 07:17:17.523522 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 07:17:17.977917 systemd-networkd[1396]: eth0: Gained IPv6LL Oct 9 07:17:17.982265 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 07:17:17.984150 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 07:17:17.994851 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 07:17:17.997419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:17.999613 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 07:17:18.021895 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 07:17:18.022340 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 07:17:18.024237 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 07:17:18.026617 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 07:17:19.093504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:19.095118 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:17:19.096456 systemd[1]: Startup finished in 1.217s (kernel) + 5.371s (initrd) + 4.861s (userspace) = 11.449s. 
Oct 9 07:17:19.124003 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:17:19.711739 kubelet[1548]: E1009 07:17:19.711659 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:17:19.716283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:17:19.716518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:17:19.717029 systemd[1]: kubelet.service: Consumed 1.591s CPU time. Oct 9 07:17:26.756570 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:17:26.758068 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:35346.service - OpenSSH per-connection server daemon (10.0.0.1:35346). Oct 9 07:17:26.807618 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 35346 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:17:26.809809 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:26.818605 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:17:26.827847 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:17:26.829837 systemd-logind[1442]: New session 1 of user core. Oct 9 07:17:26.843085 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:17:26.845244 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:17:26.854351 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:26.969718 systemd[1565]: Queued start job for default target default.target. 
Oct 9 07:17:26.979919 systemd[1565]: Created slice app.slice - User Application Slice. Oct 9 07:17:26.979954 systemd[1565]: Reached target paths.target - Paths. Oct 9 07:17:26.979969 systemd[1565]: Reached target timers.target - Timers. Oct 9 07:17:26.981564 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:17:26.994151 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:17:26.994290 systemd[1565]: Reached target sockets.target - Sockets. Oct 9 07:17:26.994310 systemd[1565]: Reached target basic.target - Basic System. Oct 9 07:17:26.994350 systemd[1565]: Reached target default.target - Main User Target. Oct 9 07:17:26.994387 systemd[1565]: Startup finished in 133ms. Oct 9 07:17:26.994950 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:17:26.996635 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:17:27.057455 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:52826.service - OpenSSH per-connection server daemon (10.0.0.1:52826). Oct 9 07:17:27.092939 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 52826 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:17:27.094433 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:27.098562 systemd-logind[1442]: New session 2 of user core. Oct 9 07:17:27.112711 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:17:27.167475 sshd[1576]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:27.178297 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:52826.service: Deactivated successfully. Oct 9 07:17:27.180008 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:17:27.181409 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:17:27.192949 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:52838.service - OpenSSH per-connection server daemon (10.0.0.1:52838). 
Oct 9 07:17:27.193923 systemd-logind[1442]: Removed session 2. Oct 9 07:17:27.219609 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 52838 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:17:27.221131 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:27.224972 systemd-logind[1442]: New session 3 of user core. Oct 9 07:17:27.238723 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:17:27.287995 sshd[1583]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:27.301517 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:52838.service: Deactivated successfully. Oct 9 07:17:27.303548 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 07:17:27.305002 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:17:27.314004 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:52844.service - OpenSSH per-connection server daemon (10.0.0.1:52844). Oct 9 07:17:27.315016 systemd-logind[1442]: Removed session 3. Oct 9 07:17:27.339437 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 52844 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:17:27.341007 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:27.344901 systemd-logind[1442]: New session 4 of user core. Oct 9 07:17:27.354715 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:17:27.408707 sshd[1590]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:27.423276 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:52844.service: Deactivated successfully. Oct 9 07:17:27.425239 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:17:27.427063 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:17:27.441969 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:52848.service - OpenSSH per-connection server daemon (10.0.0.1:52848). 
Oct 9 07:17:27.442864 systemd-logind[1442]: Removed session 4. Oct 9 07:17:27.467935 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 52848 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:17:27.469478 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:27.473305 systemd-logind[1442]: New session 5 of user core. Oct 9 07:17:27.489700 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:17:27.547914 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:17:27.548254 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:17:27.570490 sudo[1601]: pam_unix(sudo:session): session closed for user root Oct 9 07:17:27.572305 sshd[1597]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:27.583538 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:52848.service: Deactivated successfully. Oct 9 07:17:27.585513 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:17:27.586974 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:17:27.597909 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:52862.service - OpenSSH per-connection server daemon (10.0.0.1:52862). Oct 9 07:17:27.598913 systemd-logind[1442]: Removed session 5. Oct 9 07:17:27.624028 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 52862 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:17:27.625621 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:27.629169 systemd-logind[1442]: New session 6 of user core. Oct 9 07:17:27.643693 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 9 07:17:27.698187 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:17:27.698533 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:17:27.702411 sudo[1610]: pam_unix(sudo:session): session closed for user root Oct 9 07:17:27.708668 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:17:27.708982 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:17:27.730803 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:17:27.732557 auditctl[1613]: No rules Oct 9 07:17:27.734143 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:17:27.734420 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:17:27.736255 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:17:27.768698 augenrules[1631]: No rules Oct 9 07:17:27.770672 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:17:27.772182 sudo[1609]: pam_unix(sudo:session): session closed for user root Oct 9 07:17:27.774026 sshd[1606]: pam_unix(sshd:session): session closed for user core Oct 9 07:17:27.785620 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:52862.service: Deactivated successfully. Oct 9 07:17:27.787675 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:17:27.789477 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:17:27.800825 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:52868.service - OpenSSH per-connection server daemon (10.0.0.1:52868). Oct 9 07:17:27.801713 systemd-logind[1442]: Removed session 6. 
Oct 9 07:17:27.826044 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 52868 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:17:27.827418 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:17:27.831089 systemd-logind[1442]: New session 7 of user core. Oct 9 07:17:27.840699 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:17:27.893936 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:17:27.894246 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:17:28.007867 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:17:28.008011 (dockerd)[1653]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:17:28.665018 dockerd[1653]: time="2024-10-09T07:17:28.664903205Z" level=info msg="Starting up" Oct 9 07:17:28.813870 systemd[1]: var-lib-docker-metacopy\x2dcheck1754963227-merged.mount: Deactivated successfully. Oct 9 07:17:28.837509 dockerd[1653]: time="2024-10-09T07:17:28.837460640Z" level=info msg="Loading containers: start." Oct 9 07:17:28.958620 kernel: Initializing XFRM netlink socket Oct 9 07:17:29.043915 systemd-networkd[1396]: docker0: Link UP Oct 9 07:17:29.067342 dockerd[1653]: time="2024-10-09T07:17:29.067295491Z" level=info msg="Loading containers: done." Oct 9 07:17:29.202626 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1303205251-merged.mount: Deactivated successfully. 
Oct 9 07:17:29.205423 dockerd[1653]: time="2024-10-09T07:17:29.205385980Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:17:29.205649 dockerd[1653]: time="2024-10-09T07:17:29.205628836Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 9 07:17:29.205778 dockerd[1653]: time="2024-10-09T07:17:29.205760473Z" level=info msg="Daemon has completed initialization" Oct 9 07:17:29.241425 dockerd[1653]: time="2024-10-09T07:17:29.241333161Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:17:29.241543 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:17:29.852995 containerd[1459]: time="2024-10-09T07:17:29.852941126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 9 07:17:29.966730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:17:29.975750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:30.170058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:30.175840 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:17:30.229968 kubelet[1796]: E1009 07:17:30.229891 1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:17:30.236695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:17:30.236953 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 9 07:17:30.770603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188456854.mount: Deactivated successfully. Oct 9 07:17:32.034120 containerd[1459]: time="2024-10-09T07:17:32.034057308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:32.034724 containerd[1459]: time="2024-10-09T07:17:32.034675147Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=28066621" Oct 9 07:17:32.035983 containerd[1459]: time="2024-10-09T07:17:32.035950658Z" level=info msg="ImageCreate event name:\"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:32.039064 containerd[1459]: time="2024-10-09T07:17:32.039010736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:32.040459 containerd[1459]: time="2024-10-09T07:17:32.040427753Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"28063421\" in 2.187421254s" Oct 9 07:17:32.040512 containerd[1459]: time="2024-10-09T07:17:32.040462468Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3\"" Oct 9 07:17:32.042460 containerd[1459]: time="2024-10-09T07:17:32.042386035Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 9 07:17:33.623503 containerd[1459]: 
time="2024-10-09T07:17:33.623424201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:33.624324 containerd[1459]: time="2024-10-09T07:17:33.624245771Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=24690922" Oct 9 07:17:33.625599 containerd[1459]: time="2024-10-09T07:17:33.625554565Z" level=info msg="ImageCreate event name:\"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:33.628874 containerd[1459]: time="2024-10-09T07:17:33.628819237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:33.629802 containerd[1459]: time="2024-10-09T07:17:33.629752446Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"26240868\" in 1.587324413s" Oct 9 07:17:33.629802 containerd[1459]: time="2024-10-09T07:17:33.629791650Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1\"" Oct 9 07:17:33.630376 containerd[1459]: time="2024-10-09T07:17:33.630342763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 9 07:17:34.973605 containerd[1459]: time="2024-10-09T07:17:34.973508118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:34.974232 containerd[1459]: time="2024-10-09T07:17:34.974179807Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=18646758" Oct 9 07:17:34.975365 containerd[1459]: time="2024-10-09T07:17:34.975322680Z" level=info msg="ImageCreate event name:\"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:34.978352 containerd[1459]: time="2024-10-09T07:17:34.978312236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:34.979671 containerd[1459]: time="2024-10-09T07:17:34.979628614Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"20196722\" in 1.349244483s" Oct 9 07:17:34.979748 containerd[1459]: time="2024-10-09T07:17:34.979670021Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94\"" Oct 9 07:17:34.980313 containerd[1459]: time="2024-10-09T07:17:34.980272201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 9 07:17:36.306509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344168722.mount: Deactivated successfully. 
Oct 9 07:17:37.059478 containerd[1459]: time="2024-10-09T07:17:37.059416506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:37.060289 containerd[1459]: time="2024-10-09T07:17:37.060242665Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=30208881" Oct 9 07:17:37.061445 containerd[1459]: time="2024-10-09T07:17:37.061410564Z" level=info msg="ImageCreate event name:\"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:37.063628 containerd[1459]: time="2024-10-09T07:17:37.063594249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:37.064494 containerd[1459]: time="2024-10-09T07:17:37.064462827Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"30207900\" in 2.08415002s" Oct 9 07:17:37.064538 containerd[1459]: time="2024-10-09T07:17:37.064498524Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494\"" Oct 9 07:17:37.065031 containerd[1459]: time="2024-10-09T07:17:37.065004693Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:17:37.654233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1086859808.mount: Deactivated successfully. 
Oct 9 07:17:38.588899 containerd[1459]: time="2024-10-09T07:17:38.588835756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:38.589552 containerd[1459]: time="2024-10-09T07:17:38.589486997Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 07:17:38.590910 containerd[1459]: time="2024-10-09T07:17:38.590871423Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:38.594011 containerd[1459]: time="2024-10-09T07:17:38.593950967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:38.595152 containerd[1459]: time="2024-10-09T07:17:38.595093079Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.530039724s" Oct 9 07:17:38.595216 containerd[1459]: time="2024-10-09T07:17:38.595155957Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 07:17:38.595801 containerd[1459]: time="2024-10-09T07:17:38.595762594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 9 07:17:40.021190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294488539.mount: Deactivated successfully. 
Oct 9 07:17:40.026932 containerd[1459]: time="2024-10-09T07:17:40.026864820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:40.027605 containerd[1459]: time="2024-10-09T07:17:40.027529246Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 9 07:17:40.028825 containerd[1459]: time="2024-10-09T07:17:40.028791182Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:40.030988 containerd[1459]: time="2024-10-09T07:17:40.030950110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:40.031756 containerd[1459]: time="2024-10-09T07:17:40.031714704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.435920671s" Oct 9 07:17:40.031800 containerd[1459]: time="2024-10-09T07:17:40.031757745Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 9 07:17:40.032210 containerd[1459]: time="2024-10-09T07:17:40.032157715Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 9 07:17:40.487210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 07:17:40.496759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:40.648826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:17:40.653404 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:17:40.768367 kubelet[1929]: E1009 07:17:40.768163 1929 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:17:40.772591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:17:40.772822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:17:40.968963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417412225.mount: Deactivated successfully. Oct 9 07:17:43.370330 containerd[1459]: time="2024-10-09T07:17:43.370231482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:43.371595 containerd[1459]: time="2024-10-09T07:17:43.371514427Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56241740" Oct 9 07:17:43.373050 containerd[1459]: time="2024-10-09T07:17:43.373015712Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:43.376931 containerd[1459]: time="2024-10-09T07:17:43.376900296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:17:43.378408 containerd[1459]: time="2024-10-09T07:17:43.378371174Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id 
\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.346174166s" Oct 9 07:17:43.378481 containerd[1459]: time="2024-10-09T07:17:43.378411560Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Oct 9 07:17:45.804288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:45.813883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:45.841656 systemd[1]: Reloading requested from client PID 2021 ('systemctl') (unit session-7.scope)... Oct 9 07:17:45.841675 systemd[1]: Reloading... Oct 9 07:17:45.940604 zram_generator::config[2058]: No configuration found. Oct 9 07:17:46.376974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:17:46.482078 systemd[1]: Reloading finished in 639 ms. Oct 9 07:17:46.546345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:46.550446 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:46.551920 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:17:46.552194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:46.566804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:46.719094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:17:46.740923 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:17:47.277644 kubelet[2108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:17:47.277644 kubelet[2108]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:17:47.277644 kubelet[2108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:17:47.283909 kubelet[2108]: I1009 07:17:47.283833 2108 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:17:47.822894 kubelet[2108]: I1009 07:17:47.822827 2108 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 07:17:47.822894 kubelet[2108]: I1009 07:17:47.822881 2108 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:17:47.823222 kubelet[2108]: I1009 07:17:47.823195 2108 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 07:17:47.866892 kubelet[2108]: I1009 07:17:47.866837 2108 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:17:47.867812 kubelet[2108]: E1009 07:17:47.867779 2108 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:17:47.957746 kubelet[2108]: E1009 07:17:47.957701 2108 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 07:17:47.957746 kubelet[2108]: I1009 07:17:47.957737 2108 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 07:17:47.973674 kubelet[2108]: I1009 07:17:47.973623 2108 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 07:17:47.973813 kubelet[2108]: I1009 07:17:47.973772 2108 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 07:17:47.973976 kubelet[2108]: I1009 07:17:47.973924 2108 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:17:47.974197 kubelet[2108]: I1009 07:17:47.973967 2108 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 9 07:17:47.974314 kubelet[2108]: I1009 07:17:47.974204 2108 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:17:47.974314 kubelet[2108]: I1009 07:17:47.974214 2108 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 07:17:47.974368 kubelet[2108]: I1009 07:17:47.974359 2108 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:17:47.978126 kubelet[2108]: I1009 07:17:47.978084 2108 kubelet.go:408] "Attempting to 
sync node with API server" Oct 9 07:17:47.978126 kubelet[2108]: I1009 07:17:47.978116 2108 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:17:47.978213 kubelet[2108]: I1009 07:17:47.978163 2108 kubelet.go:314] "Adding apiserver pod source" Oct 9 07:17:47.978213 kubelet[2108]: I1009 07:17:47.978185 2108 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:17:47.984658 kubelet[2108]: I1009 07:17:47.984319 2108 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:17:47.984658 kubelet[2108]: W1009 07:17:47.984606 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 9 07:17:47.984802 kubelet[2108]: E1009 07:17:47.984674 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:17:47.984802 kubelet[2108]: W1009 07:17:47.984739 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 9 07:17:47.984802 kubelet[2108]: E1009 07:17:47.984774 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" 
logger="UnhandledError" Oct 9 07:17:47.986784 kubelet[2108]: I1009 07:17:47.986765 2108 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:17:47.987518 kubelet[2108]: W1009 07:17:47.987487 2108 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 07:17:47.988713 kubelet[2108]: I1009 07:17:47.988328 2108 server.go:1269] "Started kubelet" Oct 9 07:17:47.989438 kubelet[2108]: I1009 07:17:47.989172 2108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:17:47.989438 kubelet[2108]: I1009 07:17:47.989322 2108 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:17:47.990694 kubelet[2108]: I1009 07:17:47.989907 2108 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:17:47.990694 kubelet[2108]: I1009 07:17:47.990482 2108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:17:47.992092 kubelet[2108]: I1009 07:17:47.992054 2108 server.go:460] "Adding debug handlers to kubelet server" Oct 9 07:17:47.994623 kubelet[2108]: I1009 07:17:47.993005 2108 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 07:17:47.994623 kubelet[2108]: I1009 07:17:47.993624 2108 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 07:17:47.994623 kubelet[2108]: I1009 07:17:47.993748 2108 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 07:17:47.994623 kubelet[2108]: I1009 07:17:47.993816 2108 reconciler.go:26] "Reconciler: start to sync state" Oct 9 07:17:47.994623 kubelet[2108]: W1009 07:17:47.994141 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 9 07:17:47.994623 kubelet[2108]: E1009 07:17:47.994185 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:17:47.994623 kubelet[2108]: E1009 07:17:47.994421 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:47.994623 kubelet[2108]: E1009 07:17:47.994495 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="200ms" Oct 9 07:17:47.995106 kubelet[2108]: E1009 07:17:47.994999 2108 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:17:47.996316 kubelet[2108]: I1009 07:17:47.996279 2108 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:17:47.996316 kubelet[2108]: I1009 07:17:47.996307 2108 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:17:47.996441 kubelet[2108]: I1009 07:17:47.996420 2108 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:17:48.000206 kubelet[2108]: E1009 07:17:47.996328 2108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fcb7a3108784d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 07:17:47.988296913 +0000 UTC m=+0.885372454,LastTimestamp:2024-10-09 07:17:47.988296913 +0000 UTC m=+0.885372454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 07:17:48.013630 kubelet[2108]: I1009 07:17:48.013416 2108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:17:48.015933 kubelet[2108]: I1009 07:17:48.015500 2108 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 07:17:48.015933 kubelet[2108]: I1009 07:17:48.015545 2108 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:17:48.015933 kubelet[2108]: I1009 07:17:48.015585 2108 kubelet.go:2321] "Starting kubelet main sync loop" Oct 9 07:17:48.015933 kubelet[2108]: E1009 07:17:48.015636 2108 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:17:48.022143 kubelet[2108]: W1009 07:17:48.022000 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 9 07:17:48.022143 kubelet[2108]: E1009 07:17:48.022059 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:17:48.026526 kubelet[2108]: I1009 07:17:48.026485 2108 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:17:48.026526 kubelet[2108]: I1009 07:17:48.026513 2108 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:17:48.026654 kubelet[2108]: I1009 07:17:48.026540 2108 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:17:48.029520 kubelet[2108]: I1009 07:17:48.029485 2108 policy_none.go:49] "None policy: Start" Oct 9 07:17:48.030259 kubelet[2108]: I1009 07:17:48.030231 2108 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:17:48.030259 kubelet[2108]: I1009 07:17:48.030257 2108 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:17:48.037249 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Oct 9 07:17:48.052184 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 07:17:48.066914 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 07:17:48.068993 kubelet[2108]: I1009 07:17:48.068944 2108 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:17:48.069396 kubelet[2108]: I1009 07:17:48.069301 2108 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 9 07:17:48.069470 kubelet[2108]: I1009 07:17:48.069375 2108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 07:17:48.070477 kubelet[2108]: I1009 07:17:48.070089 2108 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:17:48.071795 kubelet[2108]: E1009 07:17:48.071768 2108 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 07:17:48.126259 systemd[1]: Created slice kubepods-burstable-pode312a2243b5148d05ecb06afc711a15e.slice - libcontainer container kubepods-burstable-pode312a2243b5148d05ecb06afc711a15e.slice. Oct 9 07:17:48.148938 systemd[1]: Created slice kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice - libcontainer container kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice. Oct 9 07:17:48.153362 systemd[1]: Created slice kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice - libcontainer container kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice. 
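The slice names created above follow the kubelet's systemd cgroup-driver convention (`CgroupDriver":"systemd"` in the node config earlier): `kubepods-<qos>-pod<uid>.slice`, with the pod UID embedded verbatim. A sketch reconstructing one of the slice names from this log:

```shell
# Rebuild the burstable slice name the kubelet created above from a pod UID.
# UID taken from the log; convention: kubepods-<qos>-pod<uid>.slice
uid='e312a2243b5148d05ecb06afc711a15e'
slice="kubepods-burstable-pod${uid}.slice"
echo "$slice"
```

This is why the three static control-plane pods each get their own `kubepods-burstable-pod….slice` unit under the `kubepods-burstable.slice` parent created just before.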
Oct 9 07:17:48.171567 kubelet[2108]: I1009 07:17:48.171532 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 07:17:48.172063 kubelet[2108]: E1009 07:17:48.172007 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 9 07:17:48.195755 kubelet[2108]: E1009 07:17:48.195694 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="400ms" Oct 9 07:17:48.296193 kubelet[2108]: I1009 07:17:48.296112 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:48.296193 kubelet[2108]: I1009 07:17:48.296172 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:48.296193 kubelet[2108]: I1009 07:17:48.296200 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e312a2243b5148d05ecb06afc711a15e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e312a2243b5148d05ecb06afc711a15e\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:17:48.296837 kubelet[2108]: I1009 
07:17:48.296218 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:48.296837 kubelet[2108]: I1009 07:17:48.296292 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:48.296837 kubelet[2108]: I1009 07:17:48.296366 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost" Oct 9 07:17:48.296837 kubelet[2108]: I1009 07:17:48.296388 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e312a2243b5148d05ecb06afc711a15e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e312a2243b5148d05ecb06afc711a15e\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:17:48.296837 kubelet[2108]: I1009 07:17:48.296406 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e312a2243b5148d05ecb06afc711a15e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e312a2243b5148d05ecb06afc711a15e\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:17:48.296957 kubelet[2108]: I1009 07:17:48.296430 2108 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:48.374054 kubelet[2108]: I1009 07:17:48.374002 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 07:17:48.374473 kubelet[2108]: E1009 07:17:48.374436 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 9 07:17:48.447433 kubelet[2108]: E1009 07:17:48.447238 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:48.448317 containerd[1459]: time="2024-10-09T07:17:48.448274972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e312a2243b5148d05ecb06afc711a15e,Namespace:kube-system,Attempt:0,}" Oct 9 07:17:48.452429 kubelet[2108]: E1009 07:17:48.452402 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:48.452998 containerd[1459]: time="2024-10-09T07:17:48.452935140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,}" Oct 9 07:17:48.456193 kubelet[2108]: E1009 07:17:48.456148 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:48.456510 containerd[1459]: time="2024-10-09T07:17:48.456476219Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,}" Oct 9 07:17:48.597057 kubelet[2108]: E1009 07:17:48.597009 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="800ms" Oct 9 07:17:48.776174 kubelet[2108]: I1009 07:17:48.776038 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 07:17:48.776545 kubelet[2108]: E1009 07:17:48.776461 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 9 07:17:49.005507 kubelet[2108]: W1009 07:17:49.005427 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 9 07:17:49.005507 kubelet[2108]: E1009 07:17:49.005504 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:17:49.089591 kubelet[2108]: W1009 07:17:49.089413 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 9 07:17:49.089591 kubelet[2108]: E1009 07:17:49.089487 2108 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:17:49.138506 kubelet[2108]: W1009 07:17:49.138435 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 9 07:17:49.138506 kubelet[2108]: E1009 07:17:49.138506 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:17:49.398495 kubelet[2108]: E1009 07:17:49.398363 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="1.6s" Oct 9 07:17:49.522216 kubelet[2108]: W1009 07:17:49.522142 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 9 07:17:49.522216 kubelet[2108]: E1009 07:17:49.522198 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 
07:17:49.578016 kubelet[2108]: I1009 07:17:49.577975 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 07:17:49.578343 kubelet[2108]: E1009 07:17:49.578284 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Oct 9 07:17:49.904953 kubelet[2108]: E1009 07:17:49.904902 2108 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:17:50.017046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907104334.mount: Deactivated successfully. Oct 9 07:17:50.023080 containerd[1459]: time="2024-10-09T07:17:50.023030538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:50.024007 containerd[1459]: time="2024-10-09T07:17:50.023975467Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:50.024904 containerd[1459]: time="2024-10-09T07:17:50.024863928Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:50.028626 containerd[1459]: time="2024-10-09T07:17:50.028588981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:17:50.029241 containerd[1459]: time="2024-10-09T07:17:50.029186321Z" 
level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 07:17:50.030147 containerd[1459]: time="2024-10-09T07:17:50.030111452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:17:50.031142 containerd[1459]: time="2024-10-09T07:17:50.031103522Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:50.035961 containerd[1459]: time="2024-10-09T07:17:50.035922191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:17:50.037029 containerd[1459]: time="2024-10-09T07:17:50.036988013Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.588545155s" Oct 9 07:17:50.041139 containerd[1459]: time="2024-10-09T07:17:50.041100261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.588055865s" Oct 9 07:17:50.041906 containerd[1459]: time="2024-10-09T07:17:50.041878720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.585322981s" Oct 9 07:17:50.376933 containerd[1459]: time="2024-10-09T07:17:50.376686908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:50.376933 containerd[1459]: time="2024-10-09T07:17:50.376748918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:50.376933 containerd[1459]: time="2024-10-09T07:17:50.376763015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:50.376933 containerd[1459]: time="2024-10-09T07:17:50.376772053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:50.380027 containerd[1459]: time="2024-10-09T07:17:50.379638342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:50.380027 containerd[1459]: time="2024-10-09T07:17:50.379711602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:50.380027 containerd[1459]: time="2024-10-09T07:17:50.379732082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:50.380027 containerd[1459]: time="2024-10-09T07:17:50.379746770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:50.447941 systemd[1]: Started cri-containerd-f40f2e5182f6e66e2512cbe254533840e17e50b95c71234bb5937f3b3da68c07.scope - libcontainer container f40f2e5182f6e66e2512cbe254533840e17e50b95c71234bb5937f3b3da68c07. Oct 9 07:17:50.448663 containerd[1459]: time="2024-10-09T07:17:50.448451931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:17:50.448663 containerd[1459]: time="2024-10-09T07:17:50.448591519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:50.448726 containerd[1459]: time="2024-10-09T07:17:50.448643410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:17:50.448726 containerd[1459]: time="2024-10-09T07:17:50.448668097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:17:50.472729 systemd[1]: Started cri-containerd-19355c5709a51ed5d3b3f8c10fd34de42f2e6a80935324af92f8214777384c08.scope - libcontainer container 19355c5709a51ed5d3b3f8c10fd34de42f2e6a80935324af92f8214777384c08. Oct 9 07:17:50.515814 systemd[1]: Started cri-containerd-93a47b0a7cc70268b492a61ad2c434aa28b45315df123f2c0c3b882d12810d4c.scope - libcontainer container 93a47b0a7cc70268b492a61ad2c434aa28b45315df123f2c0c3b882d12810d4c. 
Oct 9 07:17:50.539152 containerd[1459]: time="2024-10-09T07:17:50.539094583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f40f2e5182f6e66e2512cbe254533840e17e50b95c71234bb5937f3b3da68c07\"" Oct 9 07:17:50.540616 kubelet[2108]: E1009 07:17:50.540568 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:50.543761 containerd[1459]: time="2024-10-09T07:17:50.543599246Z" level=info msg="CreateContainer within sandbox \"f40f2e5182f6e66e2512cbe254533840e17e50b95c71234bb5937f3b3da68c07\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:17:50.565433 containerd[1459]: time="2024-10-09T07:17:50.564939888Z" level=info msg="CreateContainer within sandbox \"f40f2e5182f6e66e2512cbe254533840e17e50b95c71234bb5937f3b3da68c07\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"14e4d508aeeea077ffdb88fea5c6d4c1817dcd5ef2384058fd36b028d63f50bb\"" Oct 9 07:17:50.565621 containerd[1459]: time="2024-10-09T07:17:50.565564500Z" level=info msg="StartContainer for \"14e4d508aeeea077ffdb88fea5c6d4c1817dcd5ef2384058fd36b028d63f50bb\"" Oct 9 07:17:50.568304 containerd[1459]: time="2024-10-09T07:17:50.568053312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e312a2243b5148d05ecb06afc711a15e,Namespace:kube-system,Attempt:0,} returns sandbox id \"19355c5709a51ed5d3b3f8c10fd34de42f2e6a80935324af92f8214777384c08\"" Oct 9 07:17:50.570691 kubelet[2108]: E1009 07:17:50.570254 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:50.572883 containerd[1459]: time="2024-10-09T07:17:50.572844187Z" level=info 
msg="CreateContainer within sandbox \"19355c5709a51ed5d3b3f8c10fd34de42f2e6a80935324af92f8214777384c08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:17:50.578331 containerd[1459]: time="2024-10-09T07:17:50.578296415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"93a47b0a7cc70268b492a61ad2c434aa28b45315df123f2c0c3b882d12810d4c\"" Oct 9 07:17:50.579486 kubelet[2108]: E1009 07:17:50.579447 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:50.582470 containerd[1459]: time="2024-10-09T07:17:50.582414926Z" level=info msg="CreateContainer within sandbox \"93a47b0a7cc70268b492a61ad2c434aa28b45315df123f2c0c3b882d12810d4c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:17:50.595993 containerd[1459]: time="2024-10-09T07:17:50.595932834Z" level=info msg="CreateContainer within sandbox \"19355c5709a51ed5d3b3f8c10fd34de42f2e6a80935324af92f8214777384c08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"20784a792fb13d253945de01b4de84b7e053b9a979b088b3d061e4f572672aa4\"" Oct 9 07:17:50.596974 containerd[1459]: time="2024-10-09T07:17:50.596923422Z" level=info msg="StartContainer for \"20784a792fb13d253945de01b4de84b7e053b9a979b088b3d061e4f572672aa4\"" Oct 9 07:17:50.617312 kubelet[2108]: W1009 07:17:50.617251 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Oct 9 07:17:50.617312 kubelet[2108]: E1009 07:17:50.617302 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Oct 9 07:17:50.688786 containerd[1459]: time="2024-10-09T07:17:50.688622106Z" level=info msg="CreateContainer within sandbox \"93a47b0a7cc70268b492a61ad2c434aa28b45315df123f2c0c3b882d12810d4c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3cd6622561c3752ae0429a92c1cf138e17c1c097beeeb0f99609d9de20d878e8\"" Oct 9 07:17:50.689331 containerd[1459]: time="2024-10-09T07:17:50.689283419Z" level=info msg="StartContainer for \"3cd6622561c3752ae0429a92c1cf138e17c1c097beeeb0f99609d9de20d878e8\"" Oct 9 07:17:50.689786 systemd[1]: Started cri-containerd-14e4d508aeeea077ffdb88fea5c6d4c1817dcd5ef2384058fd36b028d63f50bb.scope - libcontainer container 14e4d508aeeea077ffdb88fea5c6d4c1817dcd5ef2384058fd36b028d63f50bb. Oct 9 07:17:50.693240 systemd[1]: Started cri-containerd-20784a792fb13d253945de01b4de84b7e053b9a979b088b3d061e4f572672aa4.scope - libcontainer container 20784a792fb13d253945de01b4de84b7e053b9a979b088b3d061e4f572672aa4. Oct 9 07:17:50.728025 systemd[1]: Started cri-containerd-3cd6622561c3752ae0429a92c1cf138e17c1c097beeeb0f99609d9de20d878e8.scope - libcontainer container 3cd6622561c3752ae0429a92c1cf138e17c1c097beeeb0f99609d9de20d878e8. 
Oct 9 07:17:50.746611 containerd[1459]: time="2024-10-09T07:17:50.745711902Z" level=info msg="StartContainer for \"14e4d508aeeea077ffdb88fea5c6d4c1817dcd5ef2384058fd36b028d63f50bb\" returns successfully" Oct 9 07:17:50.746611 containerd[1459]: time="2024-10-09T07:17:50.745835570Z" level=info msg="StartContainer for \"20784a792fb13d253945de01b4de84b7e053b9a979b088b3d061e4f572672aa4\" returns successfully" Oct 9 07:17:50.780130 containerd[1459]: time="2024-10-09T07:17:50.780067112Z" level=info msg="StartContainer for \"3cd6622561c3752ae0429a92c1cf138e17c1c097beeeb0f99609d9de20d878e8\" returns successfully" Oct 9 07:17:51.055185 kubelet[2108]: E1009 07:17:51.055055 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:51.058651 kubelet[2108]: E1009 07:17:51.058624 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:51.059163 kubelet[2108]: E1009 07:17:51.059138 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:51.183122 kubelet[2108]: I1009 07:17:51.183063 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 07:17:52.068613 kubelet[2108]: E1009 07:17:52.066215 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:52.320919 kubelet[2108]: E1009 07:17:52.320769 2108 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 07:17:52.415225 kubelet[2108]: I1009 07:17:52.415169 2108 kubelet_node_status.go:75] 
"Successfully registered node" node="localhost" Oct 9 07:17:52.415225 kubelet[2108]: E1009 07:17:52.415218 2108 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 9 07:17:52.435414 kubelet[2108]: E1009 07:17:52.435355 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:52.450341 kubelet[2108]: E1009 07:17:52.450212 2108 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fcb7a3108784d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 07:17:47.988296913 +0000 UTC m=+0.885372454,LastTimestamp:2024-10-09 07:17:47.988296913 +0000 UTC m=+0.885372454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 07:17:52.536361 kubelet[2108]: E1009 07:17:52.536298 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:52.612603 kubelet[2108]: E1009 07:17:52.612335 2108 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fcb7a310eda509 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 07:17:47.994989833 +0000 UTC m=+0.892065374,LastTimestamp:2024-10-09 
07:17:47.994989833 +0000 UTC m=+0.892065374,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 07:17:52.636765 kubelet[2108]: E1009 07:17:52.636712 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:52.666009 kubelet[2108]: E1009 07:17:52.665856 2108 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fcb7a312c4ecc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 07:17:48.025875654 +0000 UTC m=+0.922951195,LastTimestamp:2024-10-09 07:17:48.025875654 +0000 UTC m=+0.922951195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 07:17:52.718831 kubelet[2108]: E1009 07:17:52.718690 2108 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fcb7a312c50817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 07:17:48.025882647 +0000 UTC m=+0.922958188,LastTimestamp:2024-10-09 07:17:48.025882647 +0000 UTC m=+0.922958188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 07:17:52.737812 kubelet[2108]: E1009 07:17:52.737756 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:52.771829 kubelet[2108]: E1009 07:17:52.771695 2108 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fcb7a312c51623 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 07:17:48.025886243 +0000 UTC m=+0.922961784,LastTimestamp:2024-10-09 07:17:48.025886243 +0000 UTC m=+0.922961784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 07:17:52.837956 kubelet[2108]: E1009 07:17:52.837894 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:52.938542 kubelet[2108]: E1009 07:17:52.938403 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:53.038713 kubelet[2108]: E1009 07:17:53.038656 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:53.068099 kubelet[2108]: E1009 07:17:53.068065 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:53.139403 kubelet[2108]: E1009 07:17:53.139352 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Oct 9 07:17:53.240082 kubelet[2108]: E1009 07:17:53.239940 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:53.340503 kubelet[2108]: E1009 07:17:53.340456 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:53.441271 kubelet[2108]: E1009 07:17:53.441211 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:53.542121 kubelet[2108]: E1009 07:17:53.541989 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:53.642704 kubelet[2108]: E1009 07:17:53.642640 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:53.743229 kubelet[2108]: E1009 07:17:53.743160 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:53.843694 kubelet[2108]: E1009 07:17:53.843553 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:53.944336 kubelet[2108]: E1009 07:17:53.944275 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.044449 kubelet[2108]: E1009 07:17:54.044399 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.145351 kubelet[2108]: E1009 07:17:54.145208 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.245800 kubelet[2108]: E1009 07:17:54.245742 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.346769 kubelet[2108]: E1009 07:17:54.346711 2108 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.447531 kubelet[2108]: E1009 07:17:54.447373 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.548348 kubelet[2108]: E1009 07:17:54.548300 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.649037 kubelet[2108]: E1009 07:17:54.648991 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.677112 systemd[1]: Reloading requested from client PID 2385 ('systemctl') (unit session-7.scope)... Oct 9 07:17:54.677129 systemd[1]: Reloading... Oct 9 07:17:54.751791 kubelet[2108]: E1009 07:17:54.749668 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.756644 zram_generator::config[2425]: No configuration found. Oct 9 07:17:54.850502 kubelet[2108]: E1009 07:17:54.850455 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:54.920760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:17:54.951241 kubelet[2108]: E1009 07:17:54.951197 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:55.019750 systemd[1]: Reloading finished in 342 ms. Oct 9 07:17:55.051725 kubelet[2108]: E1009 07:17:55.051686 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:55.065053 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 07:17:55.079315 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:17:55.079737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:55.079791 systemd[1]: kubelet.service: Consumed 1.325s CPU time, 121.3M memory peak, 0B memory swap peak. Oct 9 07:17:55.085903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:17:55.232555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:17:55.237513 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:17:55.284062 kubelet[2467]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:17:55.284062 kubelet[2467]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:17:55.284062 kubelet[2467]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 07:17:55.284062 kubelet[2467]: I1009 07:17:55.283995 2467 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:17:55.291148 kubelet[2467]: I1009 07:17:55.291107 2467 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 07:17:55.291148 kubelet[2467]: I1009 07:17:55.291132 2467 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:17:55.291469 kubelet[2467]: I1009 07:17:55.291447 2467 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 07:17:55.293091 kubelet[2467]: I1009 07:17:55.293060 2467 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 07:17:55.294942 kubelet[2467]: I1009 07:17:55.294922 2467 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:17:55.297520 kubelet[2467]: E1009 07:17:55.297474 2467 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 07:17:55.297520 kubelet[2467]: I1009 07:17:55.297518 2467 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 07:17:55.301882 kubelet[2467]: I1009 07:17:55.301860 2467 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:17:55.301993 kubelet[2467]: I1009 07:17:55.301970 2467 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 9 07:17:55.302149 kubelet[2467]: I1009 07:17:55.302108 2467 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:17:55.302280 kubelet[2467]: I1009 07:17:55.302139 2467 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Oct 9 07:17:55.302365 kubelet[2467]: I1009 07:17:55.302280 2467 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:17:55.302365 kubelet[2467]: I1009 07:17:55.302287 2467 container_manager_linux.go:300] "Creating device plugin manager" Oct 9 07:17:55.302365 kubelet[2467]: I1009 07:17:55.302320 2467 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:17:55.302442 kubelet[2467]: I1009 07:17:55.302427 2467 kubelet.go:408] "Attempting to sync node with API server" Oct 9 07:17:55.302442 kubelet[2467]: I1009 07:17:55.302439 2467 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:17:55.302484 kubelet[2467]: I1009 07:17:55.302463 2467 kubelet.go:314] "Adding apiserver pod source" Oct 9 07:17:55.302484 kubelet[2467]: I1009 07:17:55.302472 2467 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:17:55.303008 kubelet[2467]: I1009 07:17:55.302970 2467 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:17:55.303689 kubelet[2467]: I1009 07:17:55.303673 2467 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:17:55.304109 kubelet[2467]: I1009 07:17:55.304031 2467 server.go:1269] "Started kubelet" Oct 9 07:17:55.304474 kubelet[2467]: I1009 07:17:55.304302 2467 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:17:55.306766 kubelet[2467]: I1009 07:17:55.306738 2467 server.go:460] "Adding debug handlers to kubelet server" Oct 9 07:17:55.308110 kubelet[2467]: I1009 07:17:55.308009 2467 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:17:55.309448 kubelet[2467]: I1009 07:17:55.304306 2467 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:17:55.310968 kubelet[2467]: I1009 07:17:55.310945 2467 server.go:236] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:17:55.311287 kubelet[2467]: I1009 07:17:55.311202 2467 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 9 07:17:55.311508 kubelet[2467]: E1009 07:17:55.311469 2467 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 07:17:55.311555 kubelet[2467]: I1009 07:17:55.311520 2467 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 9 07:17:55.312132 kubelet[2467]: I1009 07:17:55.311743 2467 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 9 07:17:55.312132 kubelet[2467]: I1009 07:17:55.311879 2467 reconciler.go:26] "Reconciler: start to sync state" Oct 9 07:17:55.321148 kubelet[2467]: E1009 07:17:55.321064 2467 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:17:55.323688 kubelet[2467]: I1009 07:17:55.323626 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:17:55.324002 kubelet[2467]: I1009 07:17:55.323979 2467 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:17:55.324111 kubelet[2467]: I1009 07:17:55.324097 2467 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:17:55.324281 kubelet[2467]: I1009 07:17:55.324257 2467 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:17:55.325628 kubelet[2467]: I1009 07:17:55.325602 2467 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 07:17:55.325679 kubelet[2467]: I1009 07:17:55.325655 2467 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:17:55.325726 kubelet[2467]: I1009 07:17:55.325682 2467 kubelet.go:2321] "Starting kubelet main sync loop" Oct 9 07:17:55.325760 kubelet[2467]: E1009 07:17:55.325732 2467 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:17:55.359622 kubelet[2467]: I1009 07:17:55.359587 2467 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:17:55.359622 kubelet[2467]: I1009 07:17:55.359611 2467 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:17:55.359622 kubelet[2467]: I1009 07:17:55.359631 2467 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:17:55.359836 kubelet[2467]: I1009 07:17:55.359790 2467 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:17:55.359836 kubelet[2467]: I1009 07:17:55.359804 2467 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:17:55.359836 kubelet[2467]: I1009 07:17:55.359828 2467 policy_none.go:49] "None policy: Start" Oct 9 07:17:55.360286 kubelet[2467]: I1009 07:17:55.360257 2467 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:17:55.360286 kubelet[2467]: I1009 07:17:55.360284 2467 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:17:55.360465 kubelet[2467]: I1009 07:17:55.360443 2467 state_mem.go:75] "Updated machine memory state" Oct 9 07:17:55.364651 kubelet[2467]: I1009 07:17:55.364554 2467 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:17:55.364823 kubelet[2467]: I1009 07:17:55.364800 2467 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 9 07:17:55.364864 kubelet[2467]: I1009 07:17:55.364821 2467 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Oct 9 07:17:55.365067 kubelet[2467]: I1009 07:17:55.365035 2467 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:17:55.469763 kubelet[2467]: I1009 07:17:55.469711 2467 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 07:17:55.475358 kubelet[2467]: I1009 07:17:55.475327 2467 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Oct 9 07:17:55.475457 kubelet[2467]: I1009 07:17:55.475398 2467 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Oct 9 07:17:55.613348 kubelet[2467]: I1009 07:17:55.613192 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e312a2243b5148d05ecb06afc711a15e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e312a2243b5148d05ecb06afc711a15e\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:17:55.613348 kubelet[2467]: I1009 07:17:55.613225 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:55.613348 kubelet[2467]: I1009 07:17:55.613248 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:55.613348 kubelet[2467]: I1009 07:17:55.613277 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:55.613348 kubelet[2467]: I1009 07:17:55.613296 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:55.613616 kubelet[2467]: I1009 07:17:55.613313 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 07:17:55.613616 kubelet[2467]: I1009 07:17:55.613327 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e312a2243b5148d05ecb06afc711a15e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e312a2243b5148d05ecb06afc711a15e\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:17:55.613616 kubelet[2467]: I1009 07:17:55.613342 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost" Oct 9 07:17:55.613616 kubelet[2467]: I1009 07:17:55.613362 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e312a2243b5148d05ecb06afc711a15e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e312a2243b5148d05ecb06afc711a15e\") " pod="kube-system/kube-apiserver-localhost" Oct 9 07:17:55.735814 kubelet[2467]: E1009 07:17:55.735776 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:55.736800 kubelet[2467]: E1009 07:17:55.736751 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:55.737021 kubelet[2467]: E1009 07:17:55.736922 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:56.303155 kubelet[2467]: I1009 07:17:56.302973 2467 apiserver.go:52] "Watching apiserver" Oct 9 07:17:56.312410 kubelet[2467]: I1009 07:17:56.312353 2467 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 9 07:17:56.340216 kubelet[2467]: E1009 07:17:56.340187 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:56.341273 kubelet[2467]: E1009 07:17:56.341242 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:56.348839 kubelet[2467]: E1009 07:17:56.348786 2467 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 07:17:56.349864 kubelet[2467]: E1009 07:17:56.348984 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:56.359625 kubelet[2467]: I1009 07:17:56.359417 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.359401386 podStartE2EDuration="1.359401386s" podCreationTimestamp="2024-10-09 07:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:17:56.358510335 +0000 UTC m=+1.116862429" watchObservedRunningTime="2024-10-09 07:17:56.359401386 +0000 UTC m=+1.117753470" Oct 9 07:17:56.373401 kubelet[2467]: I1009 07:17:56.373337 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3733141 podStartE2EDuration="1.3733141s" podCreationTimestamp="2024-10-09 07:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:17:56.367028459 +0000 UTC m=+1.125380553" watchObservedRunningTime="2024-10-09 07:17:56.3733141 +0000 UTC m=+1.131666184" Oct 9 07:17:57.342168 kubelet[2467]: E1009 07:17:57.341786 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:17:59.590147 kubelet[2467]: I1009 07:17:59.590103 2467 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 07:17:59.590705 kubelet[2467]: I1009 07:17:59.590524 2467 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 07:17:59.590749 containerd[1459]: time="2024-10-09T07:17:59.590362268Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 07:17:59.700433 kubelet[2467]: I1009 07:17:59.700372 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.700354721 podStartE2EDuration="4.700354721s" podCreationTimestamp="2024-10-09 07:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:17:56.373777695 +0000 UTC m=+1.132129779" watchObservedRunningTime="2024-10-09 07:17:59.700354721 +0000 UTC m=+4.458706805" Oct 9 07:17:59.708320 systemd[1]: Created slice kubepods-besteffort-pod176176f4_ad8f_44b0_8f2b_467a79611618.slice - libcontainer container kubepods-besteffort-pod176176f4_ad8f_44b0_8f2b_467a79611618.slice. Oct 9 07:17:59.739202 kubelet[2467]: I1009 07:17:59.739117 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/176176f4-ad8f-44b0-8f2b-467a79611618-xtables-lock\") pod \"kube-proxy-tb6tm\" (UID: \"176176f4-ad8f-44b0-8f2b-467a79611618\") " pod="kube-system/kube-proxy-tb6tm" Oct 9 07:17:59.739202 kubelet[2467]: I1009 07:17:59.739179 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/176176f4-ad8f-44b0-8f2b-467a79611618-lib-modules\") pod \"kube-proxy-tb6tm\" (UID: \"176176f4-ad8f-44b0-8f2b-467a79611618\") " pod="kube-system/kube-proxy-tb6tm" Oct 9 07:17:59.739202 kubelet[2467]: I1009 07:17:59.739197 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zlp4\" (UniqueName: \"kubernetes.io/projected/176176f4-ad8f-44b0-8f2b-467a79611618-kube-api-access-6zlp4\") pod \"kube-proxy-tb6tm\" (UID: \"176176f4-ad8f-44b0-8f2b-467a79611618\") " pod="kube-system/kube-proxy-tb6tm" Oct 9 07:17:59.739202 kubelet[2467]: I1009 07:17:59.739218 2467 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/176176f4-ad8f-44b0-8f2b-467a79611618-kube-proxy\") pod \"kube-proxy-tb6tm\" (UID: \"176176f4-ad8f-44b0-8f2b-467a79611618\") " pod="kube-system/kube-proxy-tb6tm" Oct 9 07:17:59.846699 kubelet[2467]: E1009 07:17:59.846533 2467 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 9 07:17:59.846699 kubelet[2467]: E1009 07:17:59.846588 2467 projected.go:194] Error preparing data for projected volume kube-api-access-6zlp4 for pod kube-system/kube-proxy-tb6tm: configmap "kube-root-ca.crt" not found Oct 9 07:17:59.846699 kubelet[2467]: E1009 07:17:59.846681 2467 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/176176f4-ad8f-44b0-8f2b-467a79611618-kube-api-access-6zlp4 podName:176176f4-ad8f-44b0-8f2b-467a79611618 nodeName:}" failed. No retries permitted until 2024-10-09 07:18:00.34665166 +0000 UTC m=+5.105003754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6zlp4" (UniqueName: "kubernetes.io/projected/176176f4-ad8f-44b0-8f2b-467a79611618-kube-api-access-6zlp4") pod "kube-proxy-tb6tm" (UID: "176176f4-ad8f-44b0-8f2b-467a79611618") : configmap "kube-root-ca.crt" not found Oct 9 07:18:00.004650 sudo[1642]: pam_unix(sudo:session): session closed for user root Oct 9 07:18:00.006821 sshd[1639]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:00.010976 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:52868.service: Deactivated successfully. Oct 9 07:18:00.013081 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 07:18:00.013305 systemd[1]: session-7.scope: Consumed 4.546s CPU time, 106.3M memory peak, 0B memory swap peak. Oct 9 07:18:00.013810 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Oct 9 07:18:00.014745 systemd-logind[1442]: Removed session 7. 
Oct 9 07:18:00.442699 kubelet[2467]: E1009 07:18:00.442646 2467 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 9 07:18:00.442699 kubelet[2467]: E1009 07:18:00.442679 2467 projected.go:194] Error preparing data for projected volume kube-api-access-6zlp4 for pod kube-system/kube-proxy-tb6tm: configmap "kube-root-ca.crt" not found Oct 9 07:18:00.442867 kubelet[2467]: E1009 07:18:00.442726 2467 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/176176f4-ad8f-44b0-8f2b-467a79611618-kube-api-access-6zlp4 podName:176176f4-ad8f-44b0-8f2b-467a79611618 nodeName:}" failed. No retries permitted until 2024-10-09 07:18:01.442711238 +0000 UTC m=+6.201063322 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6zlp4" (UniqueName: "kubernetes.io/projected/176176f4-ad8f-44b0-8f2b-467a79611618-kube-api-access-6zlp4") pod "kube-proxy-tb6tm" (UID: "176176f4-ad8f-44b0-8f2b-467a79611618") : configmap "kube-root-ca.crt" not found Oct 9 07:18:00.910877 systemd[1]: Created slice kubepods-besteffort-podda09ca45_aeca_4406_aafb_36c9b5906333.slice - libcontainer container kubepods-besteffort-podda09ca45_aeca_4406_aafb_36c9b5906333.slice. 
Oct 9 07:18:00.945537 kubelet[2467]: I1009 07:18:00.945484 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/da09ca45-aeca-4406-aafb-36c9b5906333-var-lib-calico\") pod \"tigera-operator-55748b469f-5qvnv\" (UID: \"da09ca45-aeca-4406-aafb-36c9b5906333\") " pod="tigera-operator/tigera-operator-55748b469f-5qvnv" Oct 9 07:18:00.945537 kubelet[2467]: I1009 07:18:00.945521 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdbj2\" (UniqueName: \"kubernetes.io/projected/da09ca45-aeca-4406-aafb-36c9b5906333-kube-api-access-vdbj2\") pod \"tigera-operator-55748b469f-5qvnv\" (UID: \"da09ca45-aeca-4406-aafb-36c9b5906333\") " pod="tigera-operator/tigera-operator-55748b469f-5qvnv" Oct 9 07:18:01.214648 containerd[1459]: time="2024-10-09T07:18:01.214492913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-5qvnv,Uid:da09ca45-aeca-4406-aafb-36c9b5906333,Namespace:tigera-operator,Attempt:0,}" Oct 9 07:18:01.240431 containerd[1459]: time="2024-10-09T07:18:01.240329578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:01.240431 containerd[1459]: time="2024-10-09T07:18:01.240393650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:01.240431 containerd[1459]: time="2024-10-09T07:18:01.240413598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:01.240431 containerd[1459]: time="2024-10-09T07:18:01.240427093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:01.265719 systemd[1]: Started cri-containerd-351a995554c7690dfbf7b0fc55607916e41d57e518898b90dcc57cd0e67022c8.scope - libcontainer container 351a995554c7690dfbf7b0fc55607916e41d57e518898b90dcc57cd0e67022c8. Oct 9 07:18:01.303128 containerd[1459]: time="2024-10-09T07:18:01.303063131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-5qvnv,Uid:da09ca45-aeca-4406-aafb-36c9b5906333,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"351a995554c7690dfbf7b0fc55607916e41d57e518898b90dcc57cd0e67022c8\"" Oct 9 07:18:01.304994 containerd[1459]: time="2024-10-09T07:18:01.304765796Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 07:18:01.519813 kubelet[2467]: E1009 07:18:01.519709 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:01.520109 containerd[1459]: time="2024-10-09T07:18:01.520064004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb6tm,Uid:176176f4-ad8f-44b0-8f2b-467a79611618,Namespace:kube-system,Attempt:0,}" Oct 9 07:18:01.545661 containerd[1459]: time="2024-10-09T07:18:01.545545355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:01.545661 containerd[1459]: time="2024-10-09T07:18:01.545626919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:01.545812 containerd[1459]: time="2024-10-09T07:18:01.545642819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:01.545812 containerd[1459]: time="2024-10-09T07:18:01.545655304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:01.569714 systemd[1]: Started cri-containerd-93a11cd20fec1f7ad02cb7c0f8e2956000a71a4b6c86c7750227e910f4619b74.scope - libcontainer container 93a11cd20fec1f7ad02cb7c0f8e2956000a71a4b6c86c7750227e910f4619b74. Oct 9 07:18:01.592262 containerd[1459]: time="2024-10-09T07:18:01.592217664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb6tm,Uid:176176f4-ad8f-44b0-8f2b-467a79611618,Namespace:kube-system,Attempt:0,} returns sandbox id \"93a11cd20fec1f7ad02cb7c0f8e2956000a71a4b6c86c7750227e910f4619b74\"" Oct 9 07:18:01.592892 kubelet[2467]: E1009 07:18:01.592871 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:01.595004 containerd[1459]: time="2024-10-09T07:18:01.594979370Z" level=info msg="CreateContainer within sandbox \"93a11cd20fec1f7ad02cb7c0f8e2956000a71a4b6c86c7750227e910f4619b74\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:18:01.620569 containerd[1459]: time="2024-10-09T07:18:01.620486650Z" level=info msg="CreateContainer within sandbox \"93a11cd20fec1f7ad02cb7c0f8e2956000a71a4b6c86c7750227e910f4619b74\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c6a73a59102264fe845d327c698f689baa5c5d1957980bb06bd7eb8c7c5beb1\"" Oct 9 07:18:01.621161 containerd[1459]: time="2024-10-09T07:18:01.621135462Z" level=info msg="StartContainer for \"5c6a73a59102264fe845d327c698f689baa5c5d1957980bb06bd7eb8c7c5beb1\"" Oct 9 07:18:01.647641 update_engine[1444]: I1009 07:18:01.647109 1444 update_attempter.cc:509] Updating boot flags... Oct 9 07:18:01.652726 systemd[1]: Started cri-containerd-5c6a73a59102264fe845d327c698f689baa5c5d1957980bb06bd7eb8c7c5beb1.scope - libcontainer container 5c6a73a59102264fe845d327c698f689baa5c5d1957980bb06bd7eb8c7c5beb1. 
Oct 9 07:18:01.658771 kubelet[2467]: E1009 07:18:01.658684 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:01.684597 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2668) Oct 9 07:18:01.701110 containerd[1459]: time="2024-10-09T07:18:01.701061298Z" level=info msg="StartContainer for \"5c6a73a59102264fe845d327c698f689baa5c5d1957980bb06bd7eb8c7c5beb1\" returns successfully" Oct 9 07:18:01.721637 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2670) Oct 9 07:18:02.350062 kubelet[2467]: E1009 07:18:02.350030 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:02.350636 kubelet[2467]: E1009 07:18:02.350609 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:02.413229 kubelet[2467]: I1009 07:18:02.413162 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tb6tm" podStartSLOduration=3.413142607 podStartE2EDuration="3.413142607s" podCreationTimestamp="2024-10-09 07:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:18:02.412713122 +0000 UTC m=+7.171065206" watchObservedRunningTime="2024-10-09 07:18:02.413142607 +0000 UTC m=+7.171494691" Oct 9 07:18:02.817120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1402618060.mount: Deactivated successfully. 
Oct 9 07:18:03.187896 containerd[1459]: time="2024-10-09T07:18:03.187711507Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:03.188555 containerd[1459]: time="2024-10-09T07:18:03.188499812Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136541" Oct 9 07:18:03.189796 containerd[1459]: time="2024-10-09T07:18:03.189759792Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:03.192113 containerd[1459]: time="2024-10-09T07:18:03.192072710Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:03.192975 containerd[1459]: time="2024-10-09T07:18:03.192944894Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.888151125s" Oct 9 07:18:03.193013 containerd[1459]: time="2024-10-09T07:18:03.192973789Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:18:03.195126 containerd[1459]: time="2024-10-09T07:18:03.195084923Z" level=info msg="CreateContainer within sandbox \"351a995554c7690dfbf7b0fc55607916e41d57e518898b90dcc57cd0e67022c8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:18:03.208987 containerd[1459]: time="2024-10-09T07:18:03.208924505Z" level=info msg="CreateContainer within sandbox 
\"351a995554c7690dfbf7b0fc55607916e41d57e518898b90dcc57cd0e67022c8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4c076b36d87120b9c46539c1fac6124d61e95677310b2ef58f5d9942a1d1ca82\"" Oct 9 07:18:03.209662 containerd[1459]: time="2024-10-09T07:18:03.209628469Z" level=info msg="StartContainer for \"4c076b36d87120b9c46539c1fac6124d61e95677310b2ef58f5d9942a1d1ca82\"" Oct 9 07:18:03.243774 systemd[1]: Started cri-containerd-4c076b36d87120b9c46539c1fac6124d61e95677310b2ef58f5d9942a1d1ca82.scope - libcontainer container 4c076b36d87120b9c46539c1fac6124d61e95677310b2ef58f5d9942a1d1ca82. Oct 9 07:18:03.273183 containerd[1459]: time="2024-10-09T07:18:03.273116916Z" level=info msg="StartContainer for \"4c076b36d87120b9c46539c1fac6124d61e95677310b2ef58f5d9942a1d1ca82\" returns successfully" Oct 9 07:18:03.362361 kubelet[2467]: I1009 07:18:03.362283 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-5qvnv" podStartSLOduration=1.4727837240000001 podStartE2EDuration="3.362266256s" podCreationTimestamp="2024-10-09 07:18:00 +0000 UTC" firstStartedPulling="2024-10-09 07:18:01.304265245 +0000 UTC m=+6.062617329" lastFinishedPulling="2024-10-09 07:18:03.193747777 +0000 UTC m=+7.952099861" observedRunningTime="2024-10-09 07:18:03.362059804 +0000 UTC m=+8.120411908" watchObservedRunningTime="2024-10-09 07:18:03.362266256 +0000 UTC m=+8.120618340" Oct 9 07:18:04.190033 kubelet[2467]: E1009 07:18:04.189987 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:04.355234 kubelet[2467]: E1009 07:18:04.355189 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:05.115128 kubelet[2467]: E1009 07:18:05.115071 2467 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:05.356261 kubelet[2467]: E1009 07:18:05.356225 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:06.621709 systemd[1]: Created slice kubepods-besteffort-pod903f6893_9bc7_4a67_b822_9a8d6779b8b2.slice - libcontainer container kubepods-besteffort-pod903f6893_9bc7_4a67_b822_9a8d6779b8b2.slice. Oct 9 07:18:06.663418 systemd[1]: Created slice kubepods-besteffort-podf175e8eb_1efd_40ad_bc04_455cd45447d7.slice - libcontainer container kubepods-besteffort-podf175e8eb_1efd_40ad_bc04_455cd45447d7.slice. Oct 9 07:18:06.681532 kubelet[2467]: I1009 07:18:06.681467 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f175e8eb-1efd-40ad-bc04-455cd45447d7-xtables-lock\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.681532 kubelet[2467]: I1009 07:18:06.681528 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f175e8eb-1efd-40ad-bc04-455cd45447d7-tigera-ca-bundle\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.681532 kubelet[2467]: I1009 07:18:06.681546 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/903f6893-9bc7-4a67-b822-9a8d6779b8b2-tigera-ca-bundle\") pod \"calico-typha-66778d9c58-6ckcn\" (UID: \"903f6893-9bc7-4a67-b822-9a8d6779b8b2\") " pod="calico-system/calico-typha-66778d9c58-6ckcn" Oct 9 
07:18:06.682068 kubelet[2467]: I1009 07:18:06.681564 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f175e8eb-1efd-40ad-bc04-455cd45447d7-lib-modules\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.682068 kubelet[2467]: I1009 07:18:06.681591 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/903f6893-9bc7-4a67-b822-9a8d6779b8b2-typha-certs\") pod \"calico-typha-66778d9c58-6ckcn\" (UID: \"903f6893-9bc7-4a67-b822-9a8d6779b8b2\") " pod="calico-system/calico-typha-66778d9c58-6ckcn" Oct 9 07:18:06.682068 kubelet[2467]: I1009 07:18:06.681610 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f175e8eb-1efd-40ad-bc04-455cd45447d7-node-certs\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.682068 kubelet[2467]: I1009 07:18:06.681648 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f175e8eb-1efd-40ad-bc04-455cd45447d7-cni-log-dir\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.682068 kubelet[2467]: I1009 07:18:06.681663 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crgh\" (UniqueName: \"kubernetes.io/projected/903f6893-9bc7-4a67-b822-9a8d6779b8b2-kube-api-access-2crgh\") pod \"calico-typha-66778d9c58-6ckcn\" (UID: \"903f6893-9bc7-4a67-b822-9a8d6779b8b2\") " pod="calico-system/calico-typha-66778d9c58-6ckcn" Oct 9 07:18:06.682189 kubelet[2467]: I1009 
07:18:06.681700 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f175e8eb-1efd-40ad-bc04-455cd45447d7-var-run-calico\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.682189 kubelet[2467]: I1009 07:18:06.681717 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f175e8eb-1efd-40ad-bc04-455cd45447d7-cni-bin-dir\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.682189 kubelet[2467]: I1009 07:18:06.681731 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f175e8eb-1efd-40ad-bc04-455cd45447d7-policysync\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.682189 kubelet[2467]: I1009 07:18:06.681744 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f175e8eb-1efd-40ad-bc04-455cd45447d7-flexvol-driver-host\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.682189 kubelet[2467]: I1009 07:18:06.681763 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgn8p\" (UniqueName: \"kubernetes.io/projected/f175e8eb-1efd-40ad-bc04-455cd45447d7-kube-api-access-kgn8p\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.682312 kubelet[2467]: I1009 07:18:06.681783 2467 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f175e8eb-1efd-40ad-bc04-455cd45447d7-var-lib-calico\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.682312 kubelet[2467]: I1009 07:18:06.681801 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f175e8eb-1efd-40ad-bc04-455cd45447d7-cni-net-dir\") pod \"calico-node-4hg95\" (UID: \"f175e8eb-1efd-40ad-bc04-455cd45447d7\") " pod="calico-system/calico-node-4hg95" Oct 9 07:18:06.766264 kubelet[2467]: E1009 07:18:06.766205 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lpj5" podUID="5964cbe6-abee-481b-af82-9fee53352333" Oct 9 07:18:06.782462 kubelet[2467]: I1009 07:18:06.782330 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5964cbe6-abee-481b-af82-9fee53352333-varrun\") pod \"csi-node-driver-2lpj5\" (UID: \"5964cbe6-abee-481b-af82-9fee53352333\") " pod="calico-system/csi-node-driver-2lpj5" Oct 9 07:18:06.782462 kubelet[2467]: I1009 07:18:06.782378 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5964cbe6-abee-481b-af82-9fee53352333-socket-dir\") pod \"csi-node-driver-2lpj5\" (UID: \"5964cbe6-abee-481b-af82-9fee53352333\") " pod="calico-system/csi-node-driver-2lpj5" Oct 9 07:18:06.782462 kubelet[2467]: I1009 07:18:06.782407 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5964cbe6-abee-481b-af82-9fee53352333-registration-dir\") pod \"csi-node-driver-2lpj5\" (UID: \"5964cbe6-abee-481b-af82-9fee53352333\") " pod="calico-system/csi-node-driver-2lpj5"
Oct 9 07:18:06.782747 kubelet[2467]: I1009 07:18:06.782484 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5964cbe6-abee-481b-af82-9fee53352333-kubelet-dir\") pod \"csi-node-driver-2lpj5\" (UID: \"5964cbe6-abee-481b-af82-9fee53352333\") " pod="calico-system/csi-node-driver-2lpj5"
Oct 9 07:18:06.782747 kubelet[2467]: I1009 07:18:06.782649 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffmg7\" (UniqueName: \"kubernetes.io/projected/5964cbe6-abee-481b-af82-9fee53352333-kube-api-access-ffmg7\") pod \"csi-node-driver-2lpj5\" (UID: \"5964cbe6-abee-481b-af82-9fee53352333\") " pod="calico-system/csi-node-driver-2lpj5"
Oct 9 07:18:06.789418 kubelet[2467]: E1009 07:18:06.788958 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.790145 kubelet[2467]: W1009 07:18:06.789546 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.790145 kubelet[2467]: E1009 07:18:06.789676 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.791760 kubelet[2467]: E1009 07:18:06.791720 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.791760 kubelet[2467]: W1009 07:18:06.791751 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.792432 kubelet[2467]: E1009 07:18:06.792402 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.792432 kubelet[2467]: W1009 07:18:06.792424 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.793116 kubelet[2467]: E1009 07:18:06.793081 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.793116 kubelet[2467]: W1009 07:18:06.793107 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.794595 kubelet[2467]: E1009 07:18:06.793294 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.794595 kubelet[2467]: E1009 07:18:06.793327 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.794595 kubelet[2467]: E1009 07:18:06.793446 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.794595 kubelet[2467]: E1009 07:18:06.793726 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.794595 kubelet[2467]: W1009 07:18:06.793753 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.794595 kubelet[2467]: E1009 07:18:06.794178 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.794595 kubelet[2467]: E1009 07:18:06.794481 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.794595 kubelet[2467]: W1009 07:18:06.794493 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.794939 kubelet[2467]: E1009 07:18:06.794755 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.795809 kubelet[2467]: E1009 07:18:06.795667 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.795809 kubelet[2467]: W1009 07:18:06.795727 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.796069 kubelet[2467]: E1009 07:18:06.796024 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.796069 kubelet[2467]: W1009 07:18:06.796047 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.796225 kubelet[2467]: E1009 07:18:06.796140 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.796225 kubelet[2467]: E1009 07:18:06.796165 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.796643 kubelet[2467]: E1009 07:18:06.796623 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.796643 kubelet[2467]: W1009 07:18:06.796640 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.796776 kubelet[2467]: E1009 07:18:06.796728 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.797255 kubelet[2467]: E1009 07:18:06.797231 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.797255 kubelet[2467]: W1009 07:18:06.797246 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.797839 kubelet[2467]: E1009 07:18:06.797476 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.797839 kubelet[2467]: E1009 07:18:06.797765 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.797839 kubelet[2467]: W1009 07:18:06.797778 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.798069 kubelet[2467]: E1009 07:18:06.797931 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.799088 kubelet[2467]: E1009 07:18:06.798094 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.799088 kubelet[2467]: W1009 07:18:06.798104 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.799088 kubelet[2467]: E1009 07:18:06.798177 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.799088 kubelet[2467]: E1009 07:18:06.798365 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.799088 kubelet[2467]: W1009 07:18:06.798373 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.799088 kubelet[2467]: E1009 07:18:06.798434 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.799088 kubelet[2467]: E1009 07:18:06.798680 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.799088 kubelet[2467]: W1009 07:18:06.798690 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.799088 kubelet[2467]: E1009 07:18:06.798754 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.799088 kubelet[2467]: E1009 07:18:06.798935 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.799315 kubelet[2467]: W1009 07:18:06.798943 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.799315 kubelet[2467]: E1009 07:18:06.799001 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.800226 kubelet[2467]: E1009 07:18:06.800201 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.800226 kubelet[2467]: W1009 07:18:06.800220 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.800380 kubelet[2467]: E1009 07:18:06.800358 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.801698 kubelet[2467]: E1009 07:18:06.801666 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.801698 kubelet[2467]: W1009 07:18:06.801684 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.801806 kubelet[2467]: E1009 07:18:06.801779 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.802188 kubelet[2467]: E1009 07:18:06.802101 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.802188 kubelet[2467]: W1009 07:18:06.802117 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.802258 kubelet[2467]: E1009 07:18:06.802204 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.803164 kubelet[2467]: E1009 07:18:06.803073 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.803164 kubelet[2467]: W1009 07:18:06.803091 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.803693 kubelet[2467]: E1009 07:18:06.803649 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.804207 kubelet[2467]: E1009 07:18:06.804179 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.804352 kubelet[2467]: W1009 07:18:06.804204 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.804352 kubelet[2467]: E1009 07:18:06.804263 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.804592 kubelet[2467]: E1009 07:18:06.804552 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.804592 kubelet[2467]: W1009 07:18:06.804568 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.804756 kubelet[2467]: E1009 07:18:06.804680 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.805639 kubelet[2467]: E1009 07:18:06.804891 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.805639 kubelet[2467]: W1009 07:18:06.804902 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.805639 kubelet[2467]: E1009 07:18:06.804987 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.805725 kubelet[2467]: E1009 07:18:06.805670 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.805725 kubelet[2467]: W1009 07:18:06.805680 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.805804 kubelet[2467]: E1009 07:18:06.805777 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.805934 kubelet[2467]: E1009 07:18:06.805912 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.805934 kubelet[2467]: W1009 07:18:06.805927 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.806003 kubelet[2467]: E1009 07:18:06.805980 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.809006 kubelet[2467]: E1009 07:18:06.806156 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.809006 kubelet[2467]: W1009 07:18:06.806165 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.809006 kubelet[2467]: E1009 07:18:06.806215 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.809006 kubelet[2467]: E1009 07:18:06.806505 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.809006 kubelet[2467]: W1009 07:18:06.806562 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.809006 kubelet[2467]: E1009 07:18:06.806697 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.809006 kubelet[2467]: E1009 07:18:06.806975 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.809006 kubelet[2467]: W1009 07:18:06.806984 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.809006 kubelet[2467]: E1009 07:18:06.807045 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.809006 kubelet[2467]: E1009 07:18:06.807256 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.809299 kubelet[2467]: W1009 07:18:06.807265 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.809299 kubelet[2467]: E1009 07:18:06.807366 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.809299 kubelet[2467]: E1009 07:18:06.807484 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.809299 kubelet[2467]: W1009 07:18:06.807491 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.809299 kubelet[2467]: E1009 07:18:06.807622 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.809299 kubelet[2467]: E1009 07:18:06.807734 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.809299 kubelet[2467]: W1009 07:18:06.807742 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.809299 kubelet[2467]: E1009 07:18:06.807751 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.809299 kubelet[2467]: E1009 07:18:06.808359 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.809299 kubelet[2467]: W1009 07:18:06.808369 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.809515 kubelet[2467]: E1009 07:18:06.808378 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.809515 kubelet[2467]: E1009 07:18:06.808568 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.809515 kubelet[2467]: W1009 07:18:06.808642 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.809515 kubelet[2467]: E1009 07:18:06.808652 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.819636 kubelet[2467]: E1009 07:18:06.818781 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.819636 kubelet[2467]: W1009 07:18:06.818806 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.819636 kubelet[2467]: E1009 07:18:06.818829 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.819636 kubelet[2467]: E1009 07:18:06.819080 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.819636 kubelet[2467]: W1009 07:18:06.819089 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.819636 kubelet[2467]: E1009 07:18:06.819098 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.828783 kubelet[2467]: E1009 07:18:06.828751 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.828998 kubelet[2467]: W1009 07:18:06.828906 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.828998 kubelet[2467]: E1009 07:18:06.828943 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.883893 kubelet[2467]: E1009 07:18:06.883746 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.883893 kubelet[2467]: W1009 07:18:06.883784 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.883893 kubelet[2467]: E1009 07:18:06.883809 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.884217 kubelet[2467]: E1009 07:18:06.884165 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.884217 kubelet[2467]: W1009 07:18:06.884193 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.884217 kubelet[2467]: E1009 07:18:06.884224 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.884754 kubelet[2467]: E1009 07:18:06.884726 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.884754 kubelet[2467]: W1009 07:18:06.884742 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.884823 kubelet[2467]: E1009 07:18:06.884761 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.885100 kubelet[2467]: E1009 07:18:06.885012 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.885100 kubelet[2467]: W1009 07:18:06.885025 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.885274 kubelet[2467]: E1009 07:18:06.885039 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.885424 kubelet[2467]: E1009 07:18:06.885390 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.885424 kubelet[2467]: W1009 07:18:06.885417 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.885486 kubelet[2467]: E1009 07:18:06.885431 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.887369 kubelet[2467]: E1009 07:18:06.887338 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.887369 kubelet[2467]: W1009 07:18:06.887353 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.887727 kubelet[2467]: E1009 07:18:06.887484 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.887779 kubelet[2467]: E1009 07:18:06.887754 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.887779 kubelet[2467]: W1009 07:18:06.887763 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.887878 kubelet[2467]: E1009 07:18:06.887860 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.888000 kubelet[2467]: E1009 07:18:06.887985 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.888037 kubelet[2467]: W1009 07:18:06.888020 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.888277 kubelet[2467]: E1009 07:18:06.888133 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.888432 kubelet[2467]: E1009 07:18:06.888404 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.888432 kubelet[2467]: W1009 07:18:06.888420 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.888607 kubelet[2467]: E1009 07:18:06.888562 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.888711 kubelet[2467]: E1009 07:18:06.888696 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.888711 kubelet[2467]: W1009 07:18:06.888707 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.888808 kubelet[2467]: E1009 07:18:06.888790 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.889020 kubelet[2467]: E1009 07:18:06.888993 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.889020 kubelet[2467]: W1009 07:18:06.889004 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.889120 kubelet[2467]: E1009 07:18:06.889104 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.889236 kubelet[2467]: E1009 07:18:06.889222 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.889236 kubelet[2467]: W1009 07:18:06.889232 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.889365 kubelet[2467]: E1009 07:18:06.889347 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.889453 kubelet[2467]: E1009 07:18:06.889438 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.889453 kubelet[2467]: W1009 07:18:06.889448 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.889616 kubelet[2467]: E1009 07:18:06.889559 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.889816 kubelet[2467]: E1009 07:18:06.889797 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.889816 kubelet[2467]: W1009 07:18:06.889809 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.889887 kubelet[2467]: E1009 07:18:06.889833 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.890090 kubelet[2467]: E1009 07:18:06.890075 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.890090 kubelet[2467]: W1009 07:18:06.890086 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.890216 kubelet[2467]: E1009 07:18:06.890178 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.890457 kubelet[2467]: E1009 07:18:06.890320 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.890457 kubelet[2467]: W1009 07:18:06.890329 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.890457 kubelet[2467]: E1009 07:18:06.890364 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.890635 kubelet[2467]: E1009 07:18:06.890617 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.890635 kubelet[2467]: W1009 07:18:06.890632 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.890701 kubelet[2467]: E1009 07:18:06.890671 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.890896 kubelet[2467]: E1009 07:18:06.890880 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.890896 kubelet[2467]: W1009 07:18:06.890893 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.891002 kubelet[2467]: E1009 07:18:06.890977 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.891185 kubelet[2467]: E1009 07:18:06.891164 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.891185 kubelet[2467]: W1009 07:18:06.891179 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.891310 kubelet[2467]: E1009 07:18:06.891288 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.891668 kubelet[2467]: E1009 07:18:06.891477 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.891668 kubelet[2467]: W1009 07:18:06.891490 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.891668 kubelet[2467]: E1009 07:18:06.891507 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:18:06.891861 kubelet[2467]: E1009 07:18:06.891840 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:18:06.891861 kubelet[2467]: W1009 07:18:06.891857 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:18:06.892015 kubelet[2467]: E1009 07:18:06.891980 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:06.892305 kubelet[2467]: E1009 07:18:06.892247 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:06.892305 kubelet[2467]: W1009 07:18:06.892267 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:06.892627 kubelet[2467]: E1009 07:18:06.892600 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:06.892754 kubelet[2467]: E1009 07:18:06.892673 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:06.892754 kubelet[2467]: W1009 07:18:06.892686 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:06.893032 kubelet[2467]: E1009 07:18:06.893013 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:06.893032 kubelet[2467]: W1009 07:18:06.893029 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:06.893315 kubelet[2467]: E1009 07:18:06.893298 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:06.893315 kubelet[2467]: W1009 07:18:06.893310 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Oct 9 07:18:06.893387 kubelet[2467]: E1009 07:18:06.893314 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:06.893387 kubelet[2467]: E1009 07:18:06.893328 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:06.893387 kubelet[2467]: E1009 07:18:06.893322 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:06.901599 kubelet[2467]: E1009 07:18:06.900319 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:06.901599 kubelet[2467]: W1009 07:18:06.900336 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:06.901599 kubelet[2467]: E1009 07:18:06.900346 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:06.929480 kubelet[2467]: E1009 07:18:06.929439 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:06.930181 containerd[1459]: time="2024-10-09T07:18:06.930116594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66778d9c58-6ckcn,Uid:903f6893-9bc7-4a67-b822-9a8d6779b8b2,Namespace:calico-system,Attempt:0,}" Oct 9 07:18:06.958308 containerd[1459]: time="2024-10-09T07:18:06.958177903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:06.958308 containerd[1459]: time="2024-10-09T07:18:06.958246353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:06.958308 containerd[1459]: time="2024-10-09T07:18:06.958265179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:06.958308 containerd[1459]: time="2024-10-09T07:18:06.958278594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:06.969010 kubelet[2467]: E1009 07:18:06.968236 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:06.970598 containerd[1459]: time="2024-10-09T07:18:06.969831493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4hg95,Uid:f175e8eb-1efd-40ad-bc04-455cd45447d7,Namespace:calico-system,Attempt:0,}" Oct 9 07:18:06.983807 systemd[1]: Started cri-containerd-2bf65df9c19c25c6997dfd3cd575437bda722b6092c3887b7945b0d8677966df.scope - libcontainer container 2bf65df9c19c25c6997dfd3cd575437bda722b6092c3887b7945b0d8677966df. Oct 9 07:18:07.031200 containerd[1459]: time="2024-10-09T07:18:07.031154312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66778d9c58-6ckcn,Uid:903f6893-9bc7-4a67-b822-9a8d6779b8b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"2bf65df9c19c25c6997dfd3cd575437bda722b6092c3887b7945b0d8677966df\"" Oct 9 07:18:07.032814 kubelet[2467]: E1009 07:18:07.032779 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:07.034162 containerd[1459]: time="2024-10-09T07:18:07.033961724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:18:07.189632 containerd[1459]: time="2024-10-09T07:18:07.189014631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:07.189632 containerd[1459]: time="2024-10-09T07:18:07.189077740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:07.189632 containerd[1459]: time="2024-10-09T07:18:07.189098128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:07.189632 containerd[1459]: time="2024-10-09T07:18:07.189111483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:07.210753 systemd[1]: Started cri-containerd-3fde9f3627b78cee86b6f35a9d2fb06d8912b090f180769ab2e1e015bcefd2a5.scope - libcontainer container 3fde9f3627b78cee86b6f35a9d2fb06d8912b090f180769ab2e1e015bcefd2a5. Oct 9 07:18:07.236925 containerd[1459]: time="2024-10-09T07:18:07.236837013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4hg95,Uid:f175e8eb-1efd-40ad-bc04-455cd45447d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3fde9f3627b78cee86b6f35a9d2fb06d8912b090f180769ab2e1e015bcefd2a5\"" Oct 9 07:18:07.237749 kubelet[2467]: E1009 07:18:07.237716 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:08.326094 kubelet[2467]: E1009 07:18:08.326039 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lpj5" podUID="5964cbe6-abee-481b-af82-9fee53352333" Oct 9 07:18:09.345167 containerd[1459]: time="2024-10-09T07:18:09.345111584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:09.345921 containerd[1459]: time="2024-10-09T07:18:09.345864617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: 
active requests=0, bytes read=29471335" Oct 9 07:18:09.347143 containerd[1459]: time="2024-10-09T07:18:09.347096245Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:09.349216 containerd[1459]: time="2024-10-09T07:18:09.349181867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:09.349777 containerd[1459]: time="2024-10-09T07:18:09.349753757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.315712212s" Oct 9 07:18:09.349833 containerd[1459]: time="2024-10-09T07:18:09.349781670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:18:09.350601 containerd[1459]: time="2024-10-09T07:18:09.350556905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:18:09.359195 containerd[1459]: time="2024-10-09T07:18:09.358945969Z" level=info msg="CreateContainer within sandbox \"2bf65df9c19c25c6997dfd3cd575437bda722b6092c3887b7945b0d8677966df\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:18:09.379074 containerd[1459]: time="2024-10-09T07:18:09.379009405Z" level=info msg="CreateContainer within sandbox \"2bf65df9c19c25c6997dfd3cd575437bda722b6092c3887b7945b0d8677966df\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"425ffb900c8f626426156838a756ed9c22be7c47b900ee7275875dc1513aad30\"" Oct 9 07:18:09.379666 containerd[1459]: time="2024-10-09T07:18:09.379568432Z" level=info msg="StartContainer for \"425ffb900c8f626426156838a756ed9c22be7c47b900ee7275875dc1513aad30\"" Oct 9 07:18:09.420759 systemd[1]: Started cri-containerd-425ffb900c8f626426156838a756ed9c22be7c47b900ee7275875dc1513aad30.scope - libcontainer container 425ffb900c8f626426156838a756ed9c22be7c47b900ee7275875dc1513aad30. Oct 9 07:18:09.466192 containerd[1459]: time="2024-10-09T07:18:09.466114838Z" level=info msg="StartContainer for \"425ffb900c8f626426156838a756ed9c22be7c47b900ee7275875dc1513aad30\" returns successfully" Oct 9 07:18:10.327003 kubelet[2467]: E1009 07:18:10.326945 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lpj5" podUID="5964cbe6-abee-481b-af82-9fee53352333" Oct 9 07:18:10.367680 kubelet[2467]: E1009 07:18:10.367598 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:10.375925 kubelet[2467]: I1009 07:18:10.375835 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66778d9c58-6ckcn" podStartSLOduration=2.05887541 podStartE2EDuration="4.375817729s" podCreationTimestamp="2024-10-09 07:18:06 +0000 UTC" firstStartedPulling="2024-10-09 07:18:07.033453632 +0000 UTC m=+11.791805716" lastFinishedPulling="2024-10-09 07:18:09.350395951 +0000 UTC m=+14.108748035" observedRunningTime="2024-10-09 07:18:10.375507453 +0000 UTC m=+15.133859537" watchObservedRunningTime="2024-10-09 07:18:10.375817729 +0000 UTC m=+15.134169813" Oct 9 07:18:10.399145 kubelet[2467]: E1009 07:18:10.399109 2467 driver-call.go:262] 
Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.399145 kubelet[2467]: W1009 07:18:10.399128 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.399145 kubelet[2467]: E1009 07:18:10.399145 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.399343 kubelet[2467]: E1009 07:18:10.399318 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.399343 kubelet[2467]: W1009 07:18:10.399330 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.399428 kubelet[2467]: E1009 07:18:10.399338 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.399547 kubelet[2467]: E1009 07:18:10.399520 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.399547 kubelet[2467]: W1009 07:18:10.399533 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.399547 kubelet[2467]: E1009 07:18:10.399541 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.399751 kubelet[2467]: E1009 07:18:10.399727 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.399751 kubelet[2467]: W1009 07:18:10.399738 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.399751 kubelet[2467]: E1009 07:18:10.399747 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.399931 kubelet[2467]: E1009 07:18:10.399906 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.399931 kubelet[2467]: W1009 07:18:10.399918 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.399931 kubelet[2467]: E1009 07:18:10.399925 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.400120 kubelet[2467]: E1009 07:18:10.400104 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.400120 kubelet[2467]: W1009 07:18:10.400113 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.400120 kubelet[2467]: E1009 07:18:10.400121 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.400287 kubelet[2467]: E1009 07:18:10.400269 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.400287 kubelet[2467]: W1009 07:18:10.400279 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.400338 kubelet[2467]: E1009 07:18:10.400287 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.400480 kubelet[2467]: E1009 07:18:10.400453 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.400480 kubelet[2467]: W1009 07:18:10.400465 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.400480 kubelet[2467]: E1009 07:18:10.400472 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.400667 kubelet[2467]: E1009 07:18:10.400649 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.400667 kubelet[2467]: W1009 07:18:10.400660 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.400667 kubelet[2467]: E1009 07:18:10.400669 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.400836 kubelet[2467]: E1009 07:18:10.400817 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.400836 kubelet[2467]: W1009 07:18:10.400828 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.400836 kubelet[2467]: E1009 07:18:10.400836 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.401030 kubelet[2467]: E1009 07:18:10.401012 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.401030 kubelet[2467]: W1009 07:18:10.401025 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.401086 kubelet[2467]: E1009 07:18:10.401033 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.401280 kubelet[2467]: E1009 07:18:10.401264 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.401280 kubelet[2467]: W1009 07:18:10.401275 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.401346 kubelet[2467]: E1009 07:18:10.401284 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.401496 kubelet[2467]: E1009 07:18:10.401476 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.401496 kubelet[2467]: W1009 07:18:10.401488 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.401542 kubelet[2467]: E1009 07:18:10.401497 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.401732 kubelet[2467]: E1009 07:18:10.401716 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.401732 kubelet[2467]: W1009 07:18:10.401725 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.401794 kubelet[2467]: E1009 07:18:10.401734 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.401936 kubelet[2467]: E1009 07:18:10.401918 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.401936 kubelet[2467]: W1009 07:18:10.401928 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.401986 kubelet[2467]: E1009 07:18:10.401936 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.408220 kubelet[2467]: E1009 07:18:10.408199 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.408220 kubelet[2467]: W1009 07:18:10.408213 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.408220 kubelet[2467]: E1009 07:18:10.408225 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.408525 kubelet[2467]: E1009 07:18:10.408492 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.408550 kubelet[2467]: W1009 07:18:10.408521 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.408599 kubelet[2467]: E1009 07:18:10.408556 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.408822 kubelet[2467]: E1009 07:18:10.408801 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.408822 kubelet[2467]: W1009 07:18:10.408818 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.408881 kubelet[2467]: E1009 07:18:10.408833 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.409032 kubelet[2467]: E1009 07:18:10.409016 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.409032 kubelet[2467]: W1009 07:18:10.409029 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.409079 kubelet[2467]: E1009 07:18:10.409045 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.409235 kubelet[2467]: E1009 07:18:10.409221 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.409235 kubelet[2467]: W1009 07:18:10.409231 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.409281 kubelet[2467]: E1009 07:18:10.409244 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.409446 kubelet[2467]: E1009 07:18:10.409431 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.409446 kubelet[2467]: W1009 07:18:10.409443 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.409500 kubelet[2467]: E1009 07:18:10.409456 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.409680 kubelet[2467]: E1009 07:18:10.409665 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.409680 kubelet[2467]: W1009 07:18:10.409677 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.409737 kubelet[2467]: E1009 07:18:10.409691 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.409939 kubelet[2467]: E1009 07:18:10.409925 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.409939 kubelet[2467]: W1009 07:18:10.409937 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.409983 kubelet[2467]: E1009 07:18:10.409952 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.410955 kubelet[2467]: E1009 07:18:10.410324 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.410955 kubelet[2467]: W1009 07:18:10.410341 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.410955 kubelet[2467]: E1009 07:18:10.410360 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.410955 kubelet[2467]: E1009 07:18:10.410567 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.410955 kubelet[2467]: W1009 07:18:10.410592 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.410955 kubelet[2467]: E1009 07:18:10.410604 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.410955 kubelet[2467]: E1009 07:18:10.410798 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.410955 kubelet[2467]: W1009 07:18:10.410807 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.410955 kubelet[2467]: E1009 07:18:10.410816 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.411287 kubelet[2467]: E1009 07:18:10.411271 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.411341 kubelet[2467]: W1009 07:18:10.411329 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.411645 kubelet[2467]: E1009 07:18:10.411605 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.411789 kubelet[2467]: W1009 07:18:10.411775 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.411845 kubelet[2467]: E1009 07:18:10.411832 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.412064 kubelet[2467]: E1009 07:18:10.412046 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.412120 kubelet[2467]: W1009 07:18:10.412109 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.412177 kubelet[2467]: E1009 07:18:10.412165 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.412445 kubelet[2467]: E1009 07:18:10.412432 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.412513 kubelet[2467]: W1009 07:18:10.412501 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.412569 kubelet[2467]: E1009 07:18:10.412557 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.413095 kubelet[2467]: E1009 07:18:10.412927 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.413095 kubelet[2467]: E1009 07:18:10.412980 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.413095 kubelet[2467]: W1009 07:18:10.412989 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.413095 kubelet[2467]: E1009 07:18:10.413000 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.413503 kubelet[2467]: E1009 07:18:10.413367 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.413503 kubelet[2467]: W1009 07:18:10.413383 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.413503 kubelet[2467]: E1009 07:18:10.413397 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:18:10.414130 kubelet[2467]: E1009 07:18:10.413768 2467 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:18:10.414130 kubelet[2467]: W1009 07:18:10.413788 2467 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:18:10.414130 kubelet[2467]: E1009 07:18:10.413803 2467 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:18:10.595304 containerd[1459]: time="2024-10-09T07:18:10.595162977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:10.596104 containerd[1459]: time="2024-10-09T07:18:10.596030085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:18:10.597619 containerd[1459]: time="2024-10-09T07:18:10.597568892Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:10.600144 containerd[1459]: time="2024-10-09T07:18:10.600108920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:10.600740 containerd[1459]: time="2024-10-09T07:18:10.600706959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.250109928s" Oct 9 07:18:10.600782 containerd[1459]: time="2024-10-09T07:18:10.600738529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:18:10.603122 containerd[1459]: time="2024-10-09T07:18:10.603086464Z" level=info msg="CreateContainer within sandbox \"3fde9f3627b78cee86b6f35a9d2fb06d8912b090f180769ab2e1e015bcefd2a5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:18:10.617261 containerd[1459]: time="2024-10-09T07:18:10.617219770Z" level=info msg="CreateContainer within sandbox \"3fde9f3627b78cee86b6f35a9d2fb06d8912b090f180769ab2e1e015bcefd2a5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"20824ca23bf2ab41562cef05535e6f7ea38541d9a8d2b19b62ee60ea8d8519ac\"" Oct 9 07:18:10.617721 containerd[1459]: time="2024-10-09T07:18:10.617686863Z" level=info msg="StartContainer for \"20824ca23bf2ab41562cef05535e6f7ea38541d9a8d2b19b62ee60ea8d8519ac\"" Oct 9 07:18:10.650706 systemd[1]: Started cri-containerd-20824ca23bf2ab41562cef05535e6f7ea38541d9a8d2b19b62ee60ea8d8519ac.scope - libcontainer container 20824ca23bf2ab41562cef05535e6f7ea38541d9a8d2b19b62ee60ea8d8519ac. Oct 9 07:18:10.678714 containerd[1459]: time="2024-10-09T07:18:10.678669211Z" level=info msg="StartContainer for \"20824ca23bf2ab41562cef05535e6f7ea38541d9a8d2b19b62ee60ea8d8519ac\" returns successfully" Oct 9 07:18:10.691772 systemd[1]: cri-containerd-20824ca23bf2ab41562cef05535e6f7ea38541d9a8d2b19b62ee60ea8d8519ac.scope: Deactivated successfully. 
Oct 9 07:18:11.009930 containerd[1459]: time="2024-10-09T07:18:11.009845327Z" level=info msg="shim disconnected" id=20824ca23bf2ab41562cef05535e6f7ea38541d9a8d2b19b62ee60ea8d8519ac namespace=k8s.io Oct 9 07:18:11.009930 containerd[1459]: time="2024-10-09T07:18:11.009920298Z" level=warning msg="cleaning up after shim disconnected" id=20824ca23bf2ab41562cef05535e6f7ea38541d9a8d2b19b62ee60ea8d8519ac namespace=k8s.io Oct 9 07:18:11.009930 containerd[1459]: time="2024-10-09T07:18:11.009932472Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:18:11.356174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20824ca23bf2ab41562cef05535e6f7ea38541d9a8d2b19b62ee60ea8d8519ac-rootfs.mount: Deactivated successfully. Oct 9 07:18:11.371646 kubelet[2467]: I1009 07:18:11.371603 2467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:18:11.372136 kubelet[2467]: E1009 07:18:11.371935 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:11.372447 kubelet[2467]: E1009 07:18:11.372421 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:11.373207 containerd[1459]: time="2024-10-09T07:18:11.373178447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:18:12.325967 kubelet[2467]: E1009 07:18:12.325902 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lpj5" podUID="5964cbe6-abee-481b-af82-9fee53352333" Oct 9 07:18:14.326833 kubelet[2467]: E1009 07:18:14.326748 2467 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lpj5" podUID="5964cbe6-abee-481b-af82-9fee53352333" Oct 9 07:18:16.326359 kubelet[2467]: E1009 07:18:16.326290 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lpj5" podUID="5964cbe6-abee-481b-af82-9fee53352333" Oct 9 07:18:16.528147 containerd[1459]: time="2024-10-09T07:18:16.528102577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:16.529075 containerd[1459]: time="2024-10-09T07:18:16.529038911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:18:16.530382 containerd[1459]: time="2024-10-09T07:18:16.530358508Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:16.532742 containerd[1459]: time="2024-10-09T07:18:16.532686524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:16.533429 containerd[1459]: time="2024-10-09T07:18:16.533388337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size 
\"94576137\" in 5.159904844s" Oct 9 07:18:16.533468 containerd[1459]: time="2024-10-09T07:18:16.533430627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:18:16.542527 containerd[1459]: time="2024-10-09T07:18:16.542489588Z" level=info msg="CreateContainer within sandbox \"3fde9f3627b78cee86b6f35a9d2fb06d8912b090f180769ab2e1e015bcefd2a5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:18:16.557983 containerd[1459]: time="2024-10-09T07:18:16.557933201Z" level=info msg="CreateContainer within sandbox \"3fde9f3627b78cee86b6f35a9d2fb06d8912b090f180769ab2e1e015bcefd2a5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"87d9866e8dfbef278bfd6fa5b5a31373d532d38c53b15e7faaca8d818701cc9e\"" Oct 9 07:18:16.560539 containerd[1459]: time="2024-10-09T07:18:16.560503865Z" level=info msg="StartContainer for \"87d9866e8dfbef278bfd6fa5b5a31373d532d38c53b15e7faaca8d818701cc9e\"" Oct 9 07:18:16.603967 systemd[1]: Started cri-containerd-87d9866e8dfbef278bfd6fa5b5a31373d532d38c53b15e7faaca8d818701cc9e.scope - libcontainer container 87d9866e8dfbef278bfd6fa5b5a31373d532d38c53b15e7faaca8d818701cc9e. 
Oct 9 07:18:16.791872 containerd[1459]: time="2024-10-09T07:18:16.791814703Z" level=info msg="StartContainer for \"87d9866e8dfbef278bfd6fa5b5a31373d532d38c53b15e7faaca8d818701cc9e\" returns successfully" Oct 9 07:18:17.383535 kubelet[2467]: E1009 07:18:17.383351 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:17.992795 containerd[1459]: time="2024-10-09T07:18:17.992719969Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:18:17.995931 systemd[1]: cri-containerd-87d9866e8dfbef278bfd6fa5b5a31373d532d38c53b15e7faaca8d818701cc9e.scope: Deactivated successfully. Oct 9 07:18:18.020326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87d9866e8dfbef278bfd6fa5b5a31373d532d38c53b15e7faaca8d818701cc9e-rootfs.mount: Deactivated successfully. 
Oct 9 07:18:18.041485 kubelet[2467]: I1009 07:18:18.041412 2467 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 9 07:18:18.060095 containerd[1459]: time="2024-10-09T07:18:18.060017634Z" level=info msg="shim disconnected" id=87d9866e8dfbef278bfd6fa5b5a31373d532d38c53b15e7faaca8d818701cc9e namespace=k8s.io Oct 9 07:18:18.060095 containerd[1459]: time="2024-10-09T07:18:18.060084870Z" level=warning msg="cleaning up after shim disconnected" id=87d9866e8dfbef278bfd6fa5b5a31373d532d38c53b15e7faaca8d818701cc9e namespace=k8s.io Oct 9 07:18:18.060095 containerd[1459]: time="2024-10-09T07:18:18.060095009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:18:18.085150 systemd[1]: Created slice kubepods-besteffort-pod03c7dbdf_59ec_40e3_a4a2_c4039d2f5906.slice - libcontainer container kubepods-besteffort-pod03c7dbdf_59ec_40e3_a4a2_c4039d2f5906.slice. Oct 9 07:18:18.092189 systemd[1]: Created slice kubepods-burstable-podffd767f2_9155_40d2_a07b_4a4ed1f5045c.slice - libcontainer container kubepods-burstable-podffd767f2_9155_40d2_a07b_4a4ed1f5045c.slice. Oct 9 07:18:18.098625 systemd[1]: Created slice kubepods-burstable-podec9cda30_1f27_419f_b930_a288ad96787d.slice - libcontainer container kubepods-burstable-podec9cda30_1f27_419f_b930_a288ad96787d.slice. 
Oct 9 07:18:18.162316 kubelet[2467]: I1009 07:18:18.162275 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8t5w\" (UniqueName: \"kubernetes.io/projected/ffd767f2-9155-40d2-a07b-4a4ed1f5045c-kube-api-access-d8t5w\") pod \"coredns-6f6b679f8f-x27bc\" (UID: \"ffd767f2-9155-40d2-a07b-4a4ed1f5045c\") " pod="kube-system/coredns-6f6b679f8f-x27bc" Oct 9 07:18:18.162479 kubelet[2467]: I1009 07:18:18.162323 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7f6j\" (UniqueName: \"kubernetes.io/projected/ec9cda30-1f27-419f-b930-a288ad96787d-kube-api-access-p7f6j\") pod \"coredns-6f6b679f8f-zr4gk\" (UID: \"ec9cda30-1f27-419f-b930-a288ad96787d\") " pod="kube-system/coredns-6f6b679f8f-zr4gk" Oct 9 07:18:18.162479 kubelet[2467]: I1009 07:18:18.162347 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffd767f2-9155-40d2-a07b-4a4ed1f5045c-config-volume\") pod \"coredns-6f6b679f8f-x27bc\" (UID: \"ffd767f2-9155-40d2-a07b-4a4ed1f5045c\") " pod="kube-system/coredns-6f6b679f8f-x27bc" Oct 9 07:18:18.162479 kubelet[2467]: I1009 07:18:18.162369 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c7dbdf-59ec-40e3-a4a2-c4039d2f5906-tigera-ca-bundle\") pod \"calico-kube-controllers-8556d8f995-5lbzv\" (UID: \"03c7dbdf-59ec-40e3-a4a2-c4039d2f5906\") " pod="calico-system/calico-kube-controllers-8556d8f995-5lbzv" Oct 9 07:18:18.162479 kubelet[2467]: I1009 07:18:18.162419 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec9cda30-1f27-419f-b930-a288ad96787d-config-volume\") pod \"coredns-6f6b679f8f-zr4gk\" (UID: 
\"ec9cda30-1f27-419f-b930-a288ad96787d\") " pod="kube-system/coredns-6f6b679f8f-zr4gk" Oct 9 07:18:18.162479 kubelet[2467]: I1009 07:18:18.162458 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wlb\" (UniqueName: \"kubernetes.io/projected/03c7dbdf-59ec-40e3-a4a2-c4039d2f5906-kube-api-access-96wlb\") pod \"calico-kube-controllers-8556d8f995-5lbzv\" (UID: \"03c7dbdf-59ec-40e3-a4a2-c4039d2f5906\") " pod="calico-system/calico-kube-controllers-8556d8f995-5lbzv" Oct 9 07:18:18.332532 systemd[1]: Created slice kubepods-besteffort-pod5964cbe6_abee_481b_af82_9fee53352333.slice - libcontainer container kubepods-besteffort-pod5964cbe6_abee_481b_af82_9fee53352333.slice. Oct 9 07:18:18.335236 containerd[1459]: time="2024-10-09T07:18:18.335175685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lpj5,Uid:5964cbe6-abee-481b-af82-9fee53352333,Namespace:calico-system,Attempt:0,}" Oct 9 07:18:18.387379 kubelet[2467]: E1009 07:18:18.387318 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:18.388317 containerd[1459]: time="2024-10-09T07:18:18.388044442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:18:18.389955 containerd[1459]: time="2024-10-09T07:18:18.389747880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8556d8f995-5lbzv,Uid:03c7dbdf-59ec-40e3-a4a2-c4039d2f5906,Namespace:calico-system,Attempt:0,}" Oct 9 07:18:18.395752 kubelet[2467]: E1009 07:18:18.395442 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:18.396100 containerd[1459]: time="2024-10-09T07:18:18.396047885Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-x27bc,Uid:ffd767f2-9155-40d2-a07b-4a4ed1f5045c,Namespace:kube-system,Attempt:0,}" Oct 9 07:18:18.402759 kubelet[2467]: E1009 07:18:18.402714 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:18.403503 containerd[1459]: time="2024-10-09T07:18:18.403437662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zr4gk,Uid:ec9cda30-1f27-419f-b930-a288ad96787d,Namespace:kube-system,Attempt:0,}" Oct 9 07:18:18.418247 containerd[1459]: time="2024-10-09T07:18:18.418188933Z" level=error msg="Failed to destroy network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.418643 containerd[1459]: time="2024-10-09T07:18:18.418608593Z" level=error msg="encountered an error cleaning up failed sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.418687 containerd[1459]: time="2024-10-09T07:18:18.418657004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lpj5,Uid:5964cbe6-abee-481b-af82-9fee53352333,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.418937 
kubelet[2467]: E1009 07:18:18.418882 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.419077 kubelet[2467]: E1009 07:18:18.418957 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2lpj5" Oct 9 07:18:18.419077 kubelet[2467]: E1009 07:18:18.418983 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2lpj5" Oct 9 07:18:18.419077 kubelet[2467]: E1009 07:18:18.419033 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2lpj5_calico-system(5964cbe6-abee-481b-af82-9fee53352333)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2lpj5_calico-system(5964cbe6-abee-481b-af82-9fee53352333)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2lpj5" podUID="5964cbe6-abee-481b-af82-9fee53352333" Oct 9 07:18:18.484551 containerd[1459]: time="2024-10-09T07:18:18.484451203Z" level=error msg="Failed to destroy network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.485623 containerd[1459]: time="2024-10-09T07:18:18.485108772Z" level=error msg="encountered an error cleaning up failed sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.485623 containerd[1459]: time="2024-10-09T07:18:18.485196898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8556d8f995-5lbzv,Uid:03c7dbdf-59ec-40e3-a4a2-c4039d2f5906,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.485885 kubelet[2467]: E1009 07:18:18.485509 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 9 07:18:18.485885 kubelet[2467]: E1009 07:18:18.485603 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8556d8f995-5lbzv" Oct 9 07:18:18.485885 kubelet[2467]: E1009 07:18:18.485626 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8556d8f995-5lbzv" Oct 9 07:18:18.486016 kubelet[2467]: E1009 07:18:18.485669 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8556d8f995-5lbzv_calico-system(03c7dbdf-59ec-40e3-a4a2-c4039d2f5906)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8556d8f995-5lbzv_calico-system(03c7dbdf-59ec-40e3-a4a2-c4039d2f5906)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8556d8f995-5lbzv" podUID="03c7dbdf-59ec-40e3-a4a2-c4039d2f5906" Oct 9 07:18:18.493756 containerd[1459]: time="2024-10-09T07:18:18.493705272Z" level=error msg="Failed to 
destroy network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.494152 containerd[1459]: time="2024-10-09T07:18:18.494124271Z" level=error msg="encountered an error cleaning up failed sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.494202 containerd[1459]: time="2024-10-09T07:18:18.494179405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x27bc,Uid:ffd767f2-9155-40d2-a07b-4a4ed1f5045c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.494453 kubelet[2467]: E1009 07:18:18.494406 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.494539 kubelet[2467]: E1009 07:18:18.494460 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-x27bc" Oct 9 07:18:18.494539 kubelet[2467]: E1009 07:18:18.494478 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-x27bc" Oct 9 07:18:18.494539 kubelet[2467]: E1009 07:18:18.494516 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-x27bc_kube-system(ffd767f2-9155-40d2-a07b-4a4ed1f5045c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-x27bc_kube-system(ffd767f2-9155-40d2-a07b-4a4ed1f5045c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-x27bc" podUID="ffd767f2-9155-40d2-a07b-4a4ed1f5045c" Oct 9 07:18:18.498023 containerd[1459]: time="2024-10-09T07:18:18.497966718Z" level=error msg="Failed to destroy network for sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.498435 containerd[1459]: time="2024-10-09T07:18:18.498399993Z" level=error msg="encountered an 
error cleaning up failed sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.498488 containerd[1459]: time="2024-10-09T07:18:18.498453555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zr4gk,Uid:ec9cda30-1f27-419f-b930-a288ad96787d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.498761 kubelet[2467]: E1009 07:18:18.498707 2467 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:18.498815 kubelet[2467]: E1009 07:18:18.498786 2467 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zr4gk" Oct 9 07:18:18.498815 kubelet[2467]: E1009 07:18:18.498809 2467 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zr4gk" Oct 9 07:18:18.498899 kubelet[2467]: E1009 07:18:18.498868 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-zr4gk_kube-system(ec9cda30-1f27-419f-b930-a288ad96787d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-zr4gk_kube-system(ec9cda30-1f27-419f-b930-a288ad96787d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zr4gk" podUID="ec9cda30-1f27-419f-b930-a288ad96787d" Oct 9 07:18:19.021163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33-shm.mount: Deactivated successfully. 
Oct 9 07:18:19.390068 kubelet[2467]: I1009 07:18:19.389842 2467 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Oct 9 07:18:19.390714 kubelet[2467]: I1009 07:18:19.390685 2467 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Oct 9 07:18:19.391035 containerd[1459]: time="2024-10-09T07:18:19.390990977Z" level=info msg="StopPodSandbox for \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\"" Oct 9 07:18:19.391594 containerd[1459]: time="2024-10-09T07:18:19.391172439Z" level=info msg="StopPodSandbox for \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\"" Oct 9 07:18:19.391594 containerd[1459]: time="2024-10-09T07:18:19.391364621Z" level=info msg="Ensure that sandbox 3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b in task-service has been cleanup successfully" Oct 9 07:18:19.391594 containerd[1459]: time="2024-10-09T07:18:19.391380531Z" level=info msg="Ensure that sandbox ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1 in task-service has been cleanup successfully" Oct 9 07:18:19.392424 kubelet[2467]: I1009 07:18:19.392387 2467 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:19.393054 containerd[1459]: time="2024-10-09T07:18:19.392915040Z" level=info msg="StopPodSandbox for \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\"" Oct 9 07:18:19.393126 containerd[1459]: time="2024-10-09T07:18:19.393112271Z" level=info msg="Ensure that sandbox 23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663 in task-service has been cleanup successfully" Oct 9 07:18:19.394134 kubelet[2467]: I1009 07:18:19.394108 2467 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:19.397515 containerd[1459]: time="2024-10-09T07:18:19.396882720Z" level=info msg="StopPodSandbox for \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\"" Oct 9 07:18:19.397515 containerd[1459]: time="2024-10-09T07:18:19.397203754Z" level=info msg="Ensure that sandbox cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33 in task-service has been cleanup successfully" Oct 9 07:18:19.435865 containerd[1459]: time="2024-10-09T07:18:19.435791236Z" level=error msg="StopPodSandbox for \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\" failed" error="failed to destroy network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:19.436164 kubelet[2467]: E1009 07:18:19.436110 2467 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Oct 9 07:18:19.436266 kubelet[2467]: E1009 07:18:19.436209 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b"} Oct 9 07:18:19.436363 kubelet[2467]: E1009 07:18:19.436285 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffd767f2-9155-40d2-a07b-4a4ed1f5045c\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:18:19.436480 kubelet[2467]: E1009 07:18:19.436352 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffd767f2-9155-40d2-a07b-4a4ed1f5045c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-x27bc" podUID="ffd767f2-9155-40d2-a07b-4a4ed1f5045c" Oct 9 07:18:19.436552 containerd[1459]: time="2024-10-09T07:18:19.436467038Z" level=error msg="StopPodSandbox for \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\" failed" error="failed to destroy network for sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:19.436688 kubelet[2467]: E1009 07:18:19.436656 2467 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Oct 9 07:18:19.436752 kubelet[2467]: E1009 
07:18:19.436689 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1"} Oct 9 07:18:19.436752 kubelet[2467]: E1009 07:18:19.436717 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec9cda30-1f27-419f-b930-a288ad96787d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:18:19.436847 kubelet[2467]: E1009 07:18:19.436742 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec9cda30-1f27-419f-b930-a288ad96787d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zr4gk" podUID="ec9cda30-1f27-419f-b930-a288ad96787d" Oct 9 07:18:19.438424 containerd[1459]: time="2024-10-09T07:18:19.438376133Z" level=error msg="StopPodSandbox for \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\" failed" error="failed to destroy network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:19.438586 kubelet[2467]: E1009 07:18:19.438538 2467 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:19.438631 kubelet[2467]: E1009 07:18:19.438615 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33"} Oct 9 07:18:19.438751 kubelet[2467]: E1009 07:18:19.438648 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5964cbe6-abee-481b-af82-9fee53352333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:18:19.438751 kubelet[2467]: E1009 07:18:19.438732 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5964cbe6-abee-481b-af82-9fee53352333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2lpj5" podUID="5964cbe6-abee-481b-af82-9fee53352333" Oct 9 07:18:19.438895 containerd[1459]: time="2024-10-09T07:18:19.438735389Z" level=error msg="StopPodSandbox for 
\"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\" failed" error="failed to destroy network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:18:19.438930 kubelet[2467]: E1009 07:18:19.438884 2467 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:19.438955 kubelet[2467]: E1009 07:18:19.438928 2467 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663"} Oct 9 07:18:19.438987 kubelet[2467]: E1009 07:18:19.438958 2467 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03c7dbdf-59ec-40e3-a4a2-c4039d2f5906\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:18:19.439029 kubelet[2467]: E1009 07:18:19.438981 2467 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03c7dbdf-59ec-40e3-a4a2-c4039d2f5906\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8556d8f995-5lbzv" podUID="03c7dbdf-59ec-40e3-a4a2-c4039d2f5906" Oct 9 07:18:22.141949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162715025.mount: Deactivated successfully. Oct 9 07:18:22.731321 containerd[1459]: time="2024-10-09T07:18:22.731230992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:22.732185 containerd[1459]: time="2024-10-09T07:18:22.732137528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:18:22.733536 containerd[1459]: time="2024-10-09T07:18:22.733490894Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:22.736036 containerd[1459]: time="2024-10-09T07:18:22.735961943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:22.736547 containerd[1459]: time="2024-10-09T07:18:22.736502921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.348412252s" Oct 9 07:18:22.736547 containerd[1459]: time="2024-10-09T07:18:22.736543016Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:18:22.747249 containerd[1459]: time="2024-10-09T07:18:22.747203312Z" level=info msg="CreateContainer within sandbox \"3fde9f3627b78cee86b6f35a9d2fb06d8912b090f180769ab2e1e015bcefd2a5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:18:22.765532 containerd[1459]: time="2024-10-09T07:18:22.765459601Z" level=info msg="CreateContainer within sandbox \"3fde9f3627b78cee86b6f35a9d2fb06d8912b090f180769ab2e1e015bcefd2a5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1f63b83f3b8a7f356e8683df4e4ce68d428ac7544531c7e865ae2937af129e11\"" Oct 9 07:18:22.766373 containerd[1459]: time="2024-10-09T07:18:22.766076591Z" level=info msg="StartContainer for \"1f63b83f3b8a7f356e8683df4e4ce68d428ac7544531c7e865ae2937af129e11\"" Oct 9 07:18:22.835764 systemd[1]: Started cri-containerd-1f63b83f3b8a7f356e8683df4e4ce68d428ac7544531c7e865ae2937af129e11.scope - libcontainer container 1f63b83f3b8a7f356e8683df4e4ce68d428ac7544531c7e865ae2937af129e11. Oct 9 07:18:22.872437 containerd[1459]: time="2024-10-09T07:18:22.872371013Z" level=info msg="StartContainer for \"1f63b83f3b8a7f356e8683df4e4ce68d428ac7544531c7e865ae2937af129e11\" returns successfully" Oct 9 07:18:22.946320 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:18:22.946522 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 9 07:18:23.413257 kubelet[2467]: E1009 07:18:23.413218 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:23.476230 kubelet[2467]: I1009 07:18:23.476099 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4hg95" podStartSLOduration=1.977125738 podStartE2EDuration="17.476075864s" podCreationTimestamp="2024-10-09 07:18:06 +0000 UTC" firstStartedPulling="2024-10-09 07:18:07.238289902 +0000 UTC m=+11.996641996" lastFinishedPulling="2024-10-09 07:18:22.737240038 +0000 UTC m=+27.495592122" observedRunningTime="2024-10-09 07:18:23.475552299 +0000 UTC m=+28.233904403" watchObservedRunningTime="2024-10-09 07:18:23.476075864 +0000 UTC m=+28.234427938" Oct 9 07:18:24.420010 kubelet[2467]: I1009 07:18:24.419960 2467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:18:24.420710 kubelet[2467]: E1009 07:18:24.420463 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:24.952212 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:52500.service - OpenSSH per-connection server daemon (10.0.0.1:52500). Oct 9 07:18:24.995157 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 52500 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:18:24.997260 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:25.002089 systemd-logind[1442]: New session 8 of user core. Oct 9 07:18:25.009867 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 07:18:25.158836 sshd[3662]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:25.163318 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:52500.service: Deactivated successfully. 
Oct 9 07:18:25.165878 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 07:18:25.166562 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:18:25.167542 systemd-logind[1442]: Removed session 8. Oct 9 07:18:25.799813 kubelet[2467]: I1009 07:18:25.799738 2467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:18:25.800386 kubelet[2467]: E1009 07:18:25.800221 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:26.425867 kubelet[2467]: E1009 07:18:26.425826 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:26.696617 kernel: bpftool[3764]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:18:26.941564 systemd-networkd[1396]: vxlan.calico: Link UP Oct 9 07:18:26.941590 systemd-networkd[1396]: vxlan.calico: Gained carrier Oct 9 07:18:28.057740 systemd-networkd[1396]: vxlan.calico: Gained IPv6LL Oct 9 07:18:30.173715 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:39824.service - OpenSSH per-connection server daemon (10.0.0.1:39824). Oct 9 07:18:30.212534 sshd[3841]: Accepted publickey for core from 10.0.0.1 port 39824 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:18:30.214430 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:30.218808 systemd-logind[1442]: New session 9 of user core. Oct 9 07:18:30.226901 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 07:18:30.367471 sshd[3841]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:30.371903 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:39824.service: Deactivated successfully. Oct 9 07:18:30.374151 systemd[1]: session-9.scope: Deactivated successfully. 
Oct 9 07:18:30.374767 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:18:30.375742 systemd-logind[1442]: Removed session 9. Oct 9 07:18:31.327288 containerd[1459]: time="2024-10-09T07:18:31.326852501Z" level=info msg="StopPodSandbox for \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\"" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.400 [INFO][3871] k8s.go 608: Cleaning up netns ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.400 [INFO][3871] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" iface="eth0" netns="/var/run/netns/cni-a157c941-e4fc-5394-f3df-e2007d4732ec" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.401 [INFO][3871] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" iface="eth0" netns="/var/run/netns/cni-a157c941-e4fc-5394-f3df-e2007d4732ec" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.401 [INFO][3871] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" iface="eth0" netns="/var/run/netns/cni-a157c941-e4fc-5394-f3df-e2007d4732ec" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.401 [INFO][3871] k8s.go 615: Releasing IP address(es) ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.401 [INFO][3871] utils.go 188: Calico CNI releasing IP address ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.454 [INFO][3878] ipam_plugin.go 417: Releasing address using handleID ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" HandleID="k8s-pod-network.ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.455 [INFO][3878] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.455 [INFO][3878] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.461 [WARNING][3878] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" HandleID="k8s-pod-network.ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.462 [INFO][3878] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" HandleID="k8s-pod-network.ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.463 [INFO][3878] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:31.469340 containerd[1459]: 2024-10-09 07:18:31.466 [INFO][3871] k8s.go 621: Teardown processing complete. ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Oct 9 07:18:31.470050 containerd[1459]: time="2024-10-09T07:18:31.469556162Z" level=info msg="TearDown network for sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\" successfully" Oct 9 07:18:31.470050 containerd[1459]: time="2024-10-09T07:18:31.469611766Z" level=info msg="StopPodSandbox for \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\" returns successfully" Oct 9 07:18:31.472621 systemd[1]: run-netns-cni\x2da157c941\x2de4fc\x2d5394\x2df3df\x2de2007d4732ec.mount: Deactivated successfully. 
Oct 9 07:18:31.479265 kubelet[2467]: E1009 07:18:31.479224 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:31.479934 containerd[1459]: time="2024-10-09T07:18:31.479638362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zr4gk,Uid:ec9cda30-1f27-419f-b930-a288ad96787d,Namespace:kube-system,Attempt:1,}" Oct 9 07:18:31.647382 systemd-networkd[1396]: calib1ded891298: Link UP Oct 9 07:18:31.647591 systemd-networkd[1396]: calib1ded891298: Gained carrier Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.529 [INFO][3886] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0 coredns-6f6b679f8f- kube-system ec9cda30-1f27-419f-b930-a288ad96787d 799 0 2024-10-09 07:18:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-zr4gk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1ded891298 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-zr4gk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--zr4gk-" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.529 [INFO][3886] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-zr4gk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.560 [INFO][3900] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" HandleID="k8s-pod-network.f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.569 [INFO][3900] ipam_plugin.go 270: Auto assigning IP ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" HandleID="k8s-pod-network.f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4c10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-zr4gk", "timestamp":"2024-10-09 07:18:31.560041356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.569 [INFO][3900] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.569 [INFO][3900] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.569 [INFO][3900] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.572 [INFO][3900] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" host="localhost" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.577 [INFO][3900] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.581 [INFO][3900] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.583 [INFO][3900] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.586 [INFO][3900] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.586 [INFO][3900] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" host="localhost" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.587 [INFO][3900] ipam.go 1685: Creating new handle: k8s-pod-network.f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6 Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.617 [INFO][3900] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" host="localhost" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.632 [INFO][3900] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" host="localhost" Oct 9 
07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.632 [INFO][3900] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" host="localhost" Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.632 [INFO][3900] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:31.673799 containerd[1459]: 2024-10-09 07:18:31.632 [INFO][3900] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" HandleID="k8s-pod-network.f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.674555 containerd[1459]: 2024-10-09 07:18:31.641 [INFO][3886] k8s.go 386: Populated endpoint ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-zr4gk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ec9cda30-1f27-419f-b930-a288ad96787d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-6f6b679f8f-zr4gk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1ded891298", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:31.674555 containerd[1459]: 2024-10-09 07:18:31.641 [INFO][3886] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-zr4gk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.674555 containerd[1459]: 2024-10-09 07:18:31.641 [INFO][3886] dataplane_linux.go 68: Setting the host side veth name to calib1ded891298 ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-zr4gk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.674555 containerd[1459]: 2024-10-09 07:18:31.647 [INFO][3886] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-zr4gk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.674555 containerd[1459]: 2024-10-09 07:18:31.648 [INFO][3886] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-zr4gk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ec9cda30-1f27-419f-b930-a288ad96787d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6", Pod:"coredns-6f6b679f8f-zr4gk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1ded891298", MAC:"e2:66:61:ea:0f:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:31.674555 containerd[1459]: 2024-10-09 07:18:31.660 [INFO][3886] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6" Namespace="kube-system" Pod="coredns-6f6b679f8f-zr4gk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0" Oct 9 07:18:31.721529 containerd[1459]: time="2024-10-09T07:18:31.721393589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:31.721529 containerd[1459]: time="2024-10-09T07:18:31.721462128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:31.721529 containerd[1459]: time="2024-10-09T07:18:31.721476204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:31.721529 containerd[1459]: time="2024-10-09T07:18:31.721485982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:31.756976 systemd[1]: Started cri-containerd-f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6.scope - libcontainer container f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6. 
Oct 9 07:18:31.770128 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:18:31.798243 containerd[1459]: time="2024-10-09T07:18:31.798174508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zr4gk,Uid:ec9cda30-1f27-419f-b930-a288ad96787d,Namespace:kube-system,Attempt:1,} returns sandbox id \"f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6\"" Oct 9 07:18:31.799161 kubelet[2467]: E1009 07:18:31.799114 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:31.801103 containerd[1459]: time="2024-10-09T07:18:31.801042256Z" level=info msg="CreateContainer within sandbox \"f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:18:31.835166 containerd[1459]: time="2024-10-09T07:18:31.835096459Z" level=info msg="CreateContainer within sandbox \"f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2181ceee97c17dfbf50ea0177dd707e33675c2fd600b5a166959a41f7b4ef579\"" Oct 9 07:18:31.835872 containerd[1459]: time="2024-10-09T07:18:31.835846338Z" level=info msg="StartContainer for \"2181ceee97c17dfbf50ea0177dd707e33675c2fd600b5a166959a41f7b4ef579\"" Oct 9 07:18:31.869992 systemd[1]: Started cri-containerd-2181ceee97c17dfbf50ea0177dd707e33675c2fd600b5a166959a41f7b4ef579.scope - libcontainer container 2181ceee97c17dfbf50ea0177dd707e33675c2fd600b5a166959a41f7b4ef579. 
Oct 9 07:18:31.900478 containerd[1459]: time="2024-10-09T07:18:31.900307744Z" level=info msg="StartContainer for \"2181ceee97c17dfbf50ea0177dd707e33675c2fd600b5a166959a41f7b4ef579\" returns successfully" Oct 9 07:18:32.327695 containerd[1459]: time="2024-10-09T07:18:32.327608891Z" level=info msg="StopPodSandbox for \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\"" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.374 [INFO][4026] k8s.go 608: Cleaning up netns ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.375 [INFO][4026] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" iface="eth0" netns="/var/run/netns/cni-3d87f547-0657-decb-31c5-a50129808570" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.375 [INFO][4026] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" iface="eth0" netns="/var/run/netns/cni-3d87f547-0657-decb-31c5-a50129808570" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.375 [INFO][4026] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" iface="eth0" netns="/var/run/netns/cni-3d87f547-0657-decb-31c5-a50129808570" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.375 [INFO][4026] k8s.go 615: Releasing IP address(es) ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.375 [INFO][4026] utils.go 188: Calico CNI releasing IP address ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.396 [INFO][4033] ipam_plugin.go 417: Releasing address using handleID ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" HandleID="k8s-pod-network.cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.396 [INFO][4033] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.396 [INFO][4033] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.402 [WARNING][4033] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" HandleID="k8s-pod-network.cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.402 [INFO][4033] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" HandleID="k8s-pod-network.cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.404 [INFO][4033] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:32.410034 containerd[1459]: 2024-10-09 07:18:32.406 [INFO][4026] k8s.go 621: Teardown processing complete. ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:32.410611 containerd[1459]: time="2024-10-09T07:18:32.410247145Z" level=info msg="TearDown network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\" successfully" Oct 9 07:18:32.410611 containerd[1459]: time="2024-10-09T07:18:32.410285296Z" level=info msg="StopPodSandbox for \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\" returns successfully" Oct 9 07:18:32.411135 containerd[1459]: time="2024-10-09T07:18:32.411098053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lpj5,Uid:5964cbe6-abee-481b-af82-9fee53352333,Namespace:calico-system,Attempt:1,}" Oct 9 07:18:32.439767 kubelet[2467]: E1009 07:18:32.439733 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:32.452430 kubelet[2467]: I1009 07:18:32.452309 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zr4gk" 
podStartSLOduration=32.452285147 podStartE2EDuration="32.452285147s" podCreationTimestamp="2024-10-09 07:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:18:32.452036439 +0000 UTC m=+37.210388534" watchObservedRunningTime="2024-10-09 07:18:32.452285147 +0000 UTC m=+37.210637231" Oct 9 07:18:32.478707 systemd[1]: run-netns-cni\x2d3d87f547\x2d0657\x2ddecb\x2d31c5\x2da50129808570.mount: Deactivated successfully. Oct 9 07:18:32.541386 systemd-networkd[1396]: cali5c20a384ca3: Link UP Oct 9 07:18:32.541638 systemd-networkd[1396]: cali5c20a384ca3: Gained carrier Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.458 [INFO][4041] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2lpj5-eth0 csi-node-driver- calico-system 5964cbe6-abee-481b-af82-9fee53352333 814 0 2024-10-09 07:18:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-2lpj5 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali5c20a384ca3 [] []}} ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Namespace="calico-system" Pod="csi-node-driver-2lpj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lpj5-" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.458 [INFO][4041] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Namespace="calico-system" Pod="csi-node-driver-2lpj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.499 [INFO][4056] 
ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" HandleID="k8s-pod-network.3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.508 [INFO][4056] ipam_plugin.go 270: Auto assigning IP ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" HandleID="k8s-pod-network.3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315370), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2lpj5", "timestamp":"2024-10-09 07:18:32.499485967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.509 [INFO][4056] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.509 [INFO][4056] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.509 [INFO][4056] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.511 [INFO][4056] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" host="localhost" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.516 [INFO][4056] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.520 [INFO][4056] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.522 [INFO][4056] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.525 [INFO][4056] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.525 [INFO][4056] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" host="localhost" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.526 [INFO][4056] ipam.go 1685: Creating new handle: k8s-pod-network.3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141 Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.529 [INFO][4056] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" host="localhost" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.535 [INFO][4056] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" host="localhost" Oct 9 
07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.535 [INFO][4056] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" host="localhost" Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.535 [INFO][4056] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:32.555299 containerd[1459]: 2024-10-09 07:18:32.535 [INFO][4056] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" HandleID="k8s-pod-network.3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.555858 containerd[1459]: 2024-10-09 07:18:32.538 [INFO][4041] k8s.go 386: Populated endpoint ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Namespace="calico-system" Pod="csi-node-driver-2lpj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lpj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2lpj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5964cbe6-abee-481b-af82-9fee53352333", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2lpj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c20a384ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:32.555858 containerd[1459]: 2024-10-09 07:18:32.538 [INFO][4041] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Namespace="calico-system" Pod="csi-node-driver-2lpj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.555858 containerd[1459]: 2024-10-09 07:18:32.538 [INFO][4041] dataplane_linux.go 68: Setting the host side veth name to cali5c20a384ca3 ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Namespace="calico-system" Pod="csi-node-driver-2lpj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.555858 containerd[1459]: 2024-10-09 07:18:32.541 [INFO][4041] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Namespace="calico-system" Pod="csi-node-driver-2lpj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.555858 containerd[1459]: 2024-10-09 07:18:32.542 [INFO][4041] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Namespace="calico-system" Pod="csi-node-driver-2lpj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lpj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2lpj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5964cbe6-abee-481b-af82-9fee53352333", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141", Pod:"csi-node-driver-2lpj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c20a384ca3", MAC:"92:a5:8f:29:e0:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:32.555858 containerd[1459]: 2024-10-09 07:18:32.551 [INFO][4041] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141" Namespace="calico-system" Pod="csi-node-driver-2lpj5" WorkloadEndpoint="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:32.576501 containerd[1459]: time="2024-10-09T07:18:32.576411872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:32.576501 containerd[1459]: time="2024-10-09T07:18:32.576467496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:32.576501 containerd[1459]: time="2024-10-09T07:18:32.576483235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:32.576501 containerd[1459]: time="2024-10-09T07:18:32.576493194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:32.598736 systemd[1]: Started cri-containerd-3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141.scope - libcontainer container 3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141. Oct 9 07:18:32.610155 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:18:32.621008 containerd[1459]: time="2024-10-09T07:18:32.620961442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lpj5,Uid:5964cbe6-abee-481b-af82-9fee53352333,Namespace:calico-system,Attempt:1,} returns sandbox id \"3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141\"" Oct 9 07:18:32.622758 containerd[1459]: time="2024-10-09T07:18:32.622721638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:18:32.793848 systemd-networkd[1396]: calib1ded891298: Gained IPv6LL Oct 9 07:18:33.443011 kubelet[2467]: E1009 07:18:33.442969 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:33.999507 kubelet[2467]: I1009 07:18:33.999445 2467 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:18:34.000035 kubelet[2467]: 
E1009 07:18:33.999977 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:34.009882 systemd-networkd[1396]: cali5c20a384ca3: Gained IPv6LL Oct 9 07:18:34.199530 systemd[1]: run-containerd-runc-k8s.io-1f63b83f3b8a7f356e8683df4e4ce68d428ac7544531c7e865ae2937af129e11-runc.WHe6aj.mount: Deactivated successfully. Oct 9 07:18:34.320828 containerd[1459]: time="2024-10-09T07:18:34.320625118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:34.321861 containerd[1459]: time="2024-10-09T07:18:34.321809612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:18:34.323229 containerd[1459]: time="2024-10-09T07:18:34.323192879Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:34.325837 containerd[1459]: time="2024-10-09T07:18:34.325757536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:18:34.326193 containerd[1459]: time="2024-10-09T07:18:34.326161384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.703404049s" Oct 9 07:18:34.326193 containerd[1459]: time="2024-10-09T07:18:34.326192052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference 
\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:18:34.328311 containerd[1459]: time="2024-10-09T07:18:34.327724610Z" level=info msg="StopPodSandbox for \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\"" Oct 9 07:18:34.328311 containerd[1459]: time="2024-10-09T07:18:34.327744027Z" level=info msg="StopPodSandbox for \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\"" Oct 9 07:18:34.329279 containerd[1459]: time="2024-10-09T07:18:34.329131632Z" level=info msg="CreateContainer within sandbox \"3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:18:34.368461 containerd[1459]: time="2024-10-09T07:18:34.368280111Z" level=info msg="CreateContainer within sandbox \"3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f736b907d89b20ff96830659ee8563a53cbe10cb8e03dde704815a5b69151fed\"" Oct 9 07:18:34.369290 containerd[1459]: time="2024-10-09T07:18:34.369240334Z" level=info msg="StartContainer for \"f736b907d89b20ff96830659ee8563a53cbe10cb8e03dde704815a5b69151fed\"" Oct 9 07:18:34.409043 systemd[1]: Started cri-containerd-f736b907d89b20ff96830659ee8563a53cbe10cb8e03dde704815a5b69151fed.scope - libcontainer container f736b907d89b20ff96830659ee8563a53cbe10cb8e03dde704815a5b69151fed. Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.388 [INFO][4203] k8s.go 608: Cleaning up netns ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.388 [INFO][4203] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" iface="eth0" netns="/var/run/netns/cni-7c04b7ea-f210-2f4a-9055-e5cf46dac66d" Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.389 [INFO][4203] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" iface="eth0" netns="/var/run/netns/cni-7c04b7ea-f210-2f4a-9055-e5cf46dac66d" Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.389 [INFO][4203] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" iface="eth0" netns="/var/run/netns/cni-7c04b7ea-f210-2f4a-9055-e5cf46dac66d" Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.389 [INFO][4203] k8s.go 615: Releasing IP address(es) ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.389 [INFO][4203] utils.go 188: Calico CNI releasing IP address ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.415 [INFO][4236] ipam_plugin.go 417: Releasing address using handleID ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" HandleID="k8s-pod-network.3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.415 [INFO][4236] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.415 [INFO][4236] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.421 [WARNING][4236] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" HandleID="k8s-pod-network.3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.421 [INFO][4236] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" HandleID="k8s-pod-network.3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.423 [INFO][4236] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:34.429215 containerd[1459]: 2024-10-09 07:18:34.426 [INFO][4203] k8s.go 621: Teardown processing complete. ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Oct 9 07:18:34.431087 containerd[1459]: time="2024-10-09T07:18:34.429444805Z" level=info msg="TearDown network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\" successfully" Oct 9 07:18:34.431087 containerd[1459]: time="2024-10-09T07:18:34.429486032Z" level=info msg="StopPodSandbox for \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\" returns successfully" Oct 9 07:18:34.431165 kubelet[2467]: E1009 07:18:34.430020 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:34.432731 containerd[1459]: time="2024-10-09T07:18:34.432182496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x27bc,Uid:ffd767f2-9155-40d2-a07b-4a4ed1f5045c,Namespace:kube-system,Attempt:1,}" Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.404 [INFO][4202] k8s.go 608: Cleaning up netns ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 
07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.405 [INFO][4202] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" iface="eth0" netns="/var/run/netns/cni-28049447-ae35-3cfa-8c1f-9f72a36581dd" Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.405 [INFO][4202] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" iface="eth0" netns="/var/run/netns/cni-28049447-ae35-3cfa-8c1f-9f72a36581dd" Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.405 [INFO][4202] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" iface="eth0" netns="/var/run/netns/cni-28049447-ae35-3cfa-8c1f-9f72a36581dd" Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.405 [INFO][4202] k8s.go 615: Releasing IP address(es) ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.405 [INFO][4202] utils.go 188: Calico CNI releasing IP address ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.428 [INFO][4244] ipam_plugin.go 417: Releasing address using handleID ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" HandleID="k8s-pod-network.23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.428 [INFO][4244] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.428 [INFO][4244] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.434 [WARNING][4244] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" HandleID="k8s-pod-network.23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.434 [INFO][4244] ipam_plugin.go 445: Releasing address using workloadID ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" HandleID="k8s-pod-network.23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.436 [INFO][4244] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:34.443887 containerd[1459]: 2024-10-09 07:18:34.439 [INFO][4202] k8s.go 621: Teardown processing complete. 
ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:34.444302 containerd[1459]: time="2024-10-09T07:18:34.444206788Z" level=info msg="TearDown network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\" successfully" Oct 9 07:18:34.444302 containerd[1459]: time="2024-10-09T07:18:34.444243386Z" level=info msg="StopPodSandbox for \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\" returns successfully" Oct 9 07:18:34.444966 containerd[1459]: time="2024-10-09T07:18:34.444922612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8556d8f995-5lbzv,Uid:03c7dbdf-59ec-40e3-a4a2-c4039d2f5906,Namespace:calico-system,Attempt:1,}" Oct 9 07:18:34.449896 kubelet[2467]: E1009 07:18:34.449868 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:34.450456 kubelet[2467]: E1009 07:18:34.450412 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:34.462251 containerd[1459]: time="2024-10-09T07:18:34.461741427Z" level=info msg="StartContainer for \"f736b907d89b20ff96830659ee8563a53cbe10cb8e03dde704815a5b69151fed\" returns successfully" Oct 9 07:18:34.466742 containerd[1459]: time="2024-10-09T07:18:34.466696902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:18:34.566988 systemd-networkd[1396]: cali29c7eb1ae3c: Link UP Oct 9 07:18:34.567236 systemd-networkd[1396]: cali29c7eb1ae3c: Gained carrier Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.490 [INFO][4267] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--x27bc-eth0 coredns-6f6b679f8f- kube-system 
ffd767f2-9155-40d2-a07b-4a4ed1f5045c 842 0 2024-10-09 07:18:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-x27bc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29c7eb1ae3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Namespace="kube-system" Pod="coredns-6f6b679f8f-x27bc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x27bc-" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.490 [INFO][4267] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Namespace="kube-system" Pod="coredns-6f6b679f8f-x27bc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.519 [INFO][4298] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" HandleID="k8s-pod-network.d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.528 [INFO][4298] ipam_plugin.go 270: Auto assigning IP ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" HandleID="k8s-pod-network.d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4c90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-x27bc", "timestamp":"2024-10-09 07:18:34.519481029 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.528 [INFO][4298] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.528 [INFO][4298] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.528 [INFO][4298] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.531 [INFO][4298] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" host="localhost" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.537 [INFO][4298] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.543 [INFO][4298] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.545 [INFO][4298] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.546 [INFO][4298] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.547 [INFO][4298] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" host="localhost" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.548 [INFO][4298] ipam.go 1685: Creating new handle: k8s-pod-network.d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.553 [INFO][4298] ipam.go 1203: Writing block in order to claim 
IPs block=192.168.88.128/26 handle="k8s-pod-network.d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" host="localhost" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.560 [INFO][4298] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" host="localhost" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.560 [INFO][4298] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" host="localhost" Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.560 [INFO][4298] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:34.582943 containerd[1459]: 2024-10-09 07:18:34.560 [INFO][4298] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" HandleID="k8s-pod-network.d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.583506 containerd[1459]: 2024-10-09 07:18:34.563 [INFO][4267] k8s.go 386: Populated endpoint ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Namespace="kube-system" Pod="coredns-6f6b679f8f-x27bc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--x27bc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ffd767f2-9155-40d2-a07b-4a4ed1f5045c", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-x27bc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29c7eb1ae3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:34.583506 containerd[1459]: 2024-10-09 07:18:34.563 [INFO][4267] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Namespace="kube-system" Pod="coredns-6f6b679f8f-x27bc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.583506 containerd[1459]: 2024-10-09 07:18:34.563 [INFO][4267] dataplane_linux.go 68: Setting the host side veth name to cali29c7eb1ae3c ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Namespace="kube-system" Pod="coredns-6f6b679f8f-x27bc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.583506 containerd[1459]: 2024-10-09 07:18:34.565 [INFO][4267] 
dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Namespace="kube-system" Pod="coredns-6f6b679f8f-x27bc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.583506 containerd[1459]: 2024-10-09 07:18:34.566 [INFO][4267] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Namespace="kube-system" Pod="coredns-6f6b679f8f-x27bc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--x27bc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ffd767f2-9155-40d2-a07b-4a4ed1f5045c", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d", Pod:"coredns-6f6b679f8f-x27bc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29c7eb1ae3c", MAC:"5a:ad:0d:fc:97:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:34.583506 containerd[1459]: 2024-10-09 07:18:34.578 [INFO][4267] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d" Namespace="kube-system" Pod="coredns-6f6b679f8f-x27bc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0" Oct 9 07:18:34.607812 containerd[1459]: time="2024-10-09T07:18:34.607679381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:34.608011 containerd[1459]: time="2024-10-09T07:18:34.607842327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:34.608011 containerd[1459]: time="2024-10-09T07:18:34.607870520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:34.608011 containerd[1459]: time="2024-10-09T07:18:34.607885117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:34.636920 systemd[1]: Started cri-containerd-d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d.scope - libcontainer container d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d. 
Oct 9 07:18:34.653384 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:18:34.671473 systemd-networkd[1396]: cali58a47c929e0: Link UP Oct 9 07:18:34.672313 systemd-networkd[1396]: cali58a47c929e0: Gained carrier Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.504 [INFO][4282] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0 calico-kube-controllers-8556d8f995- calico-system 03c7dbdf-59ec-40e3-a4a2-c4039d2f5906 843 0 2024-10-09 07:18:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8556d8f995 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8556d8f995-5lbzv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali58a47c929e0 [] []}} ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Namespace="calico-system" Pod="calico-kube-controllers-8556d8f995-5lbzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.504 [INFO][4282] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Namespace="calico-system" Pod="calico-kube-controllers-8556d8f995-5lbzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.535 [INFO][4304] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" HandleID="k8s-pod-network.04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" 
Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.543 [INFO][4304] ipam_plugin.go 270: Auto assigning IP ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" HandleID="k8s-pod-network.04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8556d8f995-5lbzv", "timestamp":"2024-10-09 07:18:34.53500411 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.544 [INFO][4304] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.560 [INFO][4304] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.560 [INFO][4304] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.632 [INFO][4304] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" host="localhost" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.637 [INFO][4304] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.644 [INFO][4304] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.648 [INFO][4304] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.651 [INFO][4304] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.651 [INFO][4304] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" host="localhost" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.652 [INFO][4304] ipam.go 1685: Creating new handle: k8s-pod-network.04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1 Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.656 [INFO][4304] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" host="localhost" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.663 [INFO][4304] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" host="localhost" Oct 9 
07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.663 [INFO][4304] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" host="localhost" Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.663 [INFO][4304] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:34.685964 containerd[1459]: 2024-10-09 07:18:34.663 [INFO][4304] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" HandleID="k8s-pod-network.04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.686539 containerd[1459]: 2024-10-09 07:18:34.667 [INFO][4282] k8s.go 386: Populated endpoint ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Namespace="calico-system" Pod="calico-kube-controllers-8556d8f995-5lbzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0", GenerateName:"calico-kube-controllers-8556d8f995-", Namespace:"calico-system", SelfLink:"", UID:"03c7dbdf-59ec-40e3-a4a2-c4039d2f5906", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8556d8f995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8556d8f995-5lbzv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58a47c929e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:34.686539 containerd[1459]: 2024-10-09 07:18:34.667 [INFO][4282] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Namespace="calico-system" Pod="calico-kube-controllers-8556d8f995-5lbzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.686539 containerd[1459]: 2024-10-09 07:18:34.668 [INFO][4282] dataplane_linux.go 68: Setting the host side veth name to cali58a47c929e0 ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Namespace="calico-system" Pod="calico-kube-controllers-8556d8f995-5lbzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.686539 containerd[1459]: 2024-10-09 07:18:34.669 [INFO][4282] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Namespace="calico-system" Pod="calico-kube-controllers-8556d8f995-5lbzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.686539 containerd[1459]: 2024-10-09 07:18:34.670 [INFO][4282] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Namespace="calico-system" 
Pod="calico-kube-controllers-8556d8f995-5lbzv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0", GenerateName:"calico-kube-controllers-8556d8f995-", Namespace:"calico-system", SelfLink:"", UID:"03c7dbdf-59ec-40e3-a4a2-c4039d2f5906", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8556d8f995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1", Pod:"calico-kube-controllers-8556d8f995-5lbzv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58a47c929e0", MAC:"76:ca:f9:80:62:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:34.686539 containerd[1459]: 2024-10-09 07:18:34.682 [INFO][4282] k8s.go 500: Wrote updated endpoint to datastore ContainerID="04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1" Namespace="calico-system" Pod="calico-kube-controllers-8556d8f995-5lbzv" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:34.692743 containerd[1459]: time="2024-10-09T07:18:34.692705626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-x27bc,Uid:ffd767f2-9155-40d2-a07b-4a4ed1f5045c,Namespace:kube-system,Attempt:1,} returns sandbox id \"d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d\"" Oct 9 07:18:34.693688 kubelet[2467]: E1009 07:18:34.693635 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:18:34.697951 containerd[1459]: time="2024-10-09T07:18:34.697735792Z" level=info msg="CreateContainer within sandbox \"d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:18:34.710440 containerd[1459]: time="2024-10-09T07:18:34.710292023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:18:34.710440 containerd[1459]: time="2024-10-09T07:18:34.710359630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:34.710750 containerd[1459]: time="2024-10-09T07:18:34.710380939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:18:34.710750 containerd[1459]: time="2024-10-09T07:18:34.710424872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:18:34.715142 containerd[1459]: time="2024-10-09T07:18:34.715087919Z" level=info msg="CreateContainer within sandbox \"d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efb3478856e990b2fcddb5f95ffd3c5f4bdad0c5d9d2aa10df1dfb452d6c5178\"" Oct 9 07:18:34.715969 containerd[1459]: time="2024-10-09T07:18:34.715929509Z" level=info msg="StartContainer for \"efb3478856e990b2fcddb5f95ffd3c5f4bdad0c5d9d2aa10df1dfb452d6c5178\"" Oct 9 07:18:34.738134 systemd[1]: Started cri-containerd-04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1.scope - libcontainer container 04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1. Oct 9 07:18:34.745424 systemd[1]: Started cri-containerd-efb3478856e990b2fcddb5f95ffd3c5f4bdad0c5d9d2aa10df1dfb452d6c5178.scope - libcontainer container efb3478856e990b2fcddb5f95ffd3c5f4bdad0c5d9d2aa10df1dfb452d6c5178. Oct 9 07:18:34.753592 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 07:18:34.877139 containerd[1459]: time="2024-10-09T07:18:34.875967021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8556d8f995-5lbzv,Uid:03c7dbdf-59ec-40e3-a4a2-c4039d2f5906,Namespace:calico-system,Attempt:1,} returns sandbox id \"04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1\"" Oct 9 07:18:34.877139 containerd[1459]: time="2024-10-09T07:18:34.876018298Z" level=info msg="StartContainer for \"efb3478856e990b2fcddb5f95ffd3c5f4bdad0c5d9d2aa10df1dfb452d6c5178\" returns successfully" Oct 9 07:18:35.106337 systemd[1]: run-netns-cni\x2d7c04b7ea\x2df210\x2d2f4a\x2d9055\x2de5cf46dac66d.mount: Deactivated successfully. Oct 9 07:18:35.106518 systemd[1]: run-netns-cni\x2d28049447\x2dae35\x2d3cfa\x2d8c1f\x2d9f72a36581dd.mount: Deactivated successfully. 
Oct 9 07:18:35.380277 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:39840.service - OpenSSH per-connection server daemon (10.0.0.1:39840).
Oct 9 07:18:35.422287 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 39840 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs
Oct 9 07:18:35.424138 sshd[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:35.429510 systemd-logind[1442]: New session 10 of user core.
Oct 9 07:18:35.433753 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 9 07:18:35.457495 kubelet[2467]: E1009 07:18:35.457437 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:18:35.473330 kubelet[2467]: I1009 07:18:35.472335 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-x27bc" podStartSLOduration=35.472315327 podStartE2EDuration="35.472315327s" podCreationTimestamp="2024-10-09 07:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:18:35.47096379 +0000 UTC m=+40.229315874" watchObservedRunningTime="2024-10-09 07:18:35.472315327 +0000 UTC m=+40.230667411"
Oct 9 07:18:35.718895 sshd[4462]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:35.727779 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:39840.service: Deactivated successfully.
Oct 9 07:18:35.730184 systemd[1]: session-10.scope: Deactivated successfully.
Oct 9 07:18:35.731171 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit.
Oct 9 07:18:35.739245 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:39842.service - OpenSSH per-connection server daemon (10.0.0.1:39842).
Oct 9 07:18:35.741291 systemd-logind[1442]: Removed session 10.
Oct 9 07:18:35.768869 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 39842 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs
Oct 9 07:18:35.771202 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:35.777094 systemd-logind[1442]: New session 11 of user core.
Oct 9 07:18:35.786841 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 9 07:18:36.158934 sshd[4488]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:36.167735 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:39842.service: Deactivated successfully.
Oct 9 07:18:36.169966 systemd[1]: session-11.scope: Deactivated successfully.
Oct 9 07:18:36.171810 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit.
Oct 9 07:18:36.177036 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:39858.service - OpenSSH per-connection server daemon (10.0.0.1:39858).
Oct 9 07:18:36.177969 systemd-logind[1442]: Removed session 11.
Oct 9 07:18:36.186868 systemd-networkd[1396]: cali29c7eb1ae3c: Gained IPv6LL
Oct 9 07:18:36.206783 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 39858 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs
Oct 9 07:18:36.208887 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:36.213371 systemd-logind[1442]: New session 12 of user core.
Oct 9 07:18:36.222712 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 9 07:18:36.338699 sshd[4500]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:36.343401 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:39858.service: Deactivated successfully.
Oct 9 07:18:36.345553 systemd[1]: session-12.scope: Deactivated successfully.
Oct 9 07:18:36.346252 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit.
Oct 9 07:18:36.347290 systemd-logind[1442]: Removed session 12.
Oct 9 07:18:36.378754 systemd-networkd[1396]: cali58a47c929e0: Gained IPv6LL
Oct 9 07:18:36.464359 kubelet[2467]: E1009 07:18:36.464329 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:18:36.586463 containerd[1459]: time="2024-10-09T07:18:36.586396370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:18:36.587237 containerd[1459]: time="2024-10-09T07:18:36.587145667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Oct 9 07:18:36.588434 containerd[1459]: time="2024-10-09T07:18:36.588373492Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:18:36.590523 containerd[1459]: time="2024-10-09T07:18:36.590475749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:18:36.591459 containerd[1459]: time="2024-10-09T07:18:36.591402520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.12464878s"
Oct 9 07:18:36.591459 containerd[1459]: time="2024-10-09T07:18:36.591453756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Oct 9 07:18:36.592646 containerd[1459]: time="2024-10-09T07:18:36.592600629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\""
Oct 9 07:18:36.593824 containerd[1459]: time="2024-10-09T07:18:36.593772179Z" level=info msg="CreateContainer within sandbox \"3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 9 07:18:36.610707 containerd[1459]: time="2024-10-09T07:18:36.610655369Z" level=info msg="CreateContainer within sandbox \"3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"86e9d3dd3efee7414f71e2cf03d42afa977a30a3f7b38fce0699a690361f61cc\""
Oct 9 07:18:36.611637 containerd[1459]: time="2024-10-09T07:18:36.611328993Z" level=info msg="StartContainer for \"86e9d3dd3efee7414f71e2cf03d42afa977a30a3f7b38fce0699a690361f61cc\""
Oct 9 07:18:36.648728 systemd[1]: Started cri-containerd-86e9d3dd3efee7414f71e2cf03d42afa977a30a3f7b38fce0699a690361f61cc.scope - libcontainer container 86e9d3dd3efee7414f71e2cf03d42afa977a30a3f7b38fce0699a690361f61cc.
Oct 9 07:18:36.679313 containerd[1459]: time="2024-10-09T07:18:36.679263186Z" level=info msg="StartContainer for \"86e9d3dd3efee7414f71e2cf03d42afa977a30a3f7b38fce0699a690361f61cc\" returns successfully"
Oct 9 07:18:37.393632 kubelet[2467]: I1009 07:18:37.393588 2467 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 9 07:18:37.393632 kubelet[2467]: I1009 07:18:37.393638 2467 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 9 07:18:37.469069 kubelet[2467]: E1009 07:18:37.469031 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:18:37.479604 kubelet[2467]: I1009 07:18:37.479033 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2lpj5" podStartSLOduration=27.508844949 podStartE2EDuration="31.479009827s" podCreationTimestamp="2024-10-09 07:18:06 +0000 UTC" firstStartedPulling="2024-10-09 07:18:32.622188677 +0000 UTC m=+37.380540751" lastFinishedPulling="2024-10-09 07:18:36.592353545 +0000 UTC m=+41.350705629" observedRunningTime="2024-10-09 07:18:37.478788622 +0000 UTC m=+42.237140736" watchObservedRunningTime="2024-10-09 07:18:37.479009827 +0000 UTC m=+42.237361911"
Oct 9 07:18:38.510090 containerd[1459]: time="2024-10-09T07:18:38.510004332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:18:38.510700 containerd[1459]: time="2024-10-09T07:18:38.510631278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125"
Oct 9 07:18:38.511855 containerd[1459]: time="2024-10-09T07:18:38.511802548Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:18:38.514556 containerd[1459]: time="2024-10-09T07:18:38.514497086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:18:38.515102 containerd[1459]: time="2024-10-09T07:18:38.515067486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 1.922427062s"
Oct 9 07:18:38.515102 containerd[1459]: time="2024-10-09T07:18:38.515097703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\""
Oct 9 07:18:38.525364 containerd[1459]: time="2024-10-09T07:18:38.525320854Z" level=info msg="CreateContainer within sandbox \"04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Oct 9 07:18:38.541882 containerd[1459]: time="2024-10-09T07:18:38.541826510Z" level=info msg="CreateContainer within sandbox \"04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c4ccd7809ca530852a265bc351cfc828f0713f442704370145a8610a9c381f02\""
Oct 9 07:18:38.542649 containerd[1459]: time="2024-10-09T07:18:38.542602998Z" level=info msg="StartContainer for \"c4ccd7809ca530852a265bc351cfc828f0713f442704370145a8610a9c381f02\""
Oct 9 07:18:38.578901 systemd[1]: Started cri-containerd-c4ccd7809ca530852a265bc351cfc828f0713f442704370145a8610a9c381f02.scope - libcontainer container c4ccd7809ca530852a265bc351cfc828f0713f442704370145a8610a9c381f02.
Oct 9 07:18:38.628763 containerd[1459]: time="2024-10-09T07:18:38.628711418Z" level=info msg="StartContainer for \"c4ccd7809ca530852a265bc351cfc828f0713f442704370145a8610a9c381f02\" returns successfully"
Oct 9 07:18:39.489954 kubelet[2467]: I1009 07:18:39.489870 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8556d8f995-5lbzv" podStartSLOduration=29.852239439 podStartE2EDuration="33.489848292s" podCreationTimestamp="2024-10-09 07:18:06 +0000 UTC" firstStartedPulling="2024-10-09 07:18:34.878563668 +0000 UTC m=+39.636915752" lastFinishedPulling="2024-10-09 07:18:38.516172521 +0000 UTC m=+43.274524605" observedRunningTime="2024-10-09 07:18:39.489264386 +0000 UTC m=+44.247616480" watchObservedRunningTime="2024-10-09 07:18:39.489848292 +0000 UTC m=+44.248200376"
Oct 9 07:18:41.358154 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:52746.service - OpenSSH per-connection server daemon (10.0.0.1:52746).
Oct 9 07:18:41.390595 sshd[4630]: Accepted publickey for core from 10.0.0.1 port 52746 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs
Oct 9 07:18:41.392173 sshd[4630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:41.396307 systemd-logind[1442]: New session 13 of user core.
Oct 9 07:18:41.407792 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 9 07:18:41.528898 sshd[4630]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:41.533238 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:52746.service: Deactivated successfully.
Oct 9 07:18:41.535299 systemd[1]: session-13.scope: Deactivated successfully.
Oct 9 07:18:41.535910 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit.
Oct 9 07:18:41.536885 systemd-logind[1442]: Removed session 13.
Oct 9 07:18:46.542646 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:52754.service - OpenSSH per-connection server daemon (10.0.0.1:52754).
Oct 9 07:18:46.575912 sshd[4652]: Accepted publickey for core from 10.0.0.1 port 52754 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs
Oct 9 07:18:46.577925 sshd[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:46.582371 systemd-logind[1442]: New session 14 of user core.
Oct 9 07:18:46.591734 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 9 07:18:46.706546 sshd[4652]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:46.711311 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:52754.service: Deactivated successfully.
Oct 9 07:18:46.713717 systemd[1]: session-14.scope: Deactivated successfully.
Oct 9 07:18:46.714402 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit.
Oct 9 07:18:46.715724 systemd-logind[1442]: Removed session 14.
Oct 9 07:18:51.718686 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:51978.service - OpenSSH per-connection server daemon (10.0.0.1:51978).
Oct 9 07:18:51.755342 sshd[4678]: Accepted publickey for core from 10.0.0.1 port 51978 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs
Oct 9 07:18:51.756972 sshd[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:18:51.761060 systemd-logind[1442]: New session 15 of user core.
Oct 9 07:18:51.771718 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 9 07:18:51.885024 sshd[4678]: pam_unix(sshd:session): session closed for user core
Oct 9 07:18:51.889257 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:51978.service: Deactivated successfully.
Oct 9 07:18:51.891668 systemd[1]: session-15.scope: Deactivated successfully.
Oct 9 07:18:51.892343 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit.
Oct 9 07:18:51.893222 systemd-logind[1442]: Removed session 15.
Oct 9 07:18:55.323100 containerd[1459]: time="2024-10-09T07:18:55.323048787Z" level=info msg="StopPodSandbox for \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\""
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.359 [WARNING][4729] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--x27bc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ffd767f2-9155-40d2-a07b-4a4ed1f5045c", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d", Pod:"coredns-6f6b679f8f-x27bc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29c7eb1ae3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.359 [INFO][4729] k8s.go 608: Cleaning up netns ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b"
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.359 [INFO][4729] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" iface="eth0" netns=""
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.360 [INFO][4729] k8s.go 615: Releasing IP address(es) ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b"
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.360 [INFO][4729] utils.go 188: Calico CNI releasing IP address ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b"
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.382 [INFO][4737] ipam_plugin.go 417: Releasing address using handleID ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" HandleID="k8s-pod-network.3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0"
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.382 [INFO][4737] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.382 [INFO][4737] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.388 [WARNING][4737] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" HandleID="k8s-pod-network.3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0"
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.388 [INFO][4737] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" HandleID="k8s-pod-network.3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0"
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.390 [INFO][4737] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:18:55.395821 containerd[1459]: 2024-10-09 07:18:55.393 [INFO][4729] k8s.go 621: Teardown processing complete. ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b"
Oct 9 07:18:55.396402 containerd[1459]: time="2024-10-09T07:18:55.395869664Z" level=info msg="TearDown network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\" successfully"
Oct 9 07:18:55.396402 containerd[1459]: time="2024-10-09T07:18:55.395906405Z" level=info msg="StopPodSandbox for \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\" returns successfully"
Oct 9 07:18:55.396666 containerd[1459]: time="2024-10-09T07:18:55.396636298Z" level=info msg="RemovePodSandbox for \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\""
Oct 9 07:18:55.398859 containerd[1459]: time="2024-10-09T07:18:55.398832933Z" level=info msg="Forcibly stopping sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\""
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.442 [WARNING][4759] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--x27bc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ffd767f2-9155-40d2-a07b-4a4ed1f5045c", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d162d4d6dcf5337e9add8d7b89da56c153a713277ac5cb8f288100bf37fa6e6d", Pod:"coredns-6f6b679f8f-x27bc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29c7eb1ae3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.442 [INFO][4759] k8s.go 608: Cleaning up netns ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b"
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.442 [INFO][4759] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" iface="eth0" netns=""
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.442 [INFO][4759] k8s.go 615: Releasing IP address(es) ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b"
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.442 [INFO][4759] utils.go 188: Calico CNI releasing IP address ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b"
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.463 [INFO][4767] ipam_plugin.go 417: Releasing address using handleID ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" HandleID="k8s-pod-network.3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0"
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.463 [INFO][4767] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.463 [INFO][4767] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.468 [WARNING][4767] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" HandleID="k8s-pod-network.3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0"
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.468 [INFO][4767] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" HandleID="k8s-pod-network.3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b" Workload="localhost-k8s-coredns--6f6b679f8f--x27bc-eth0"
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.470 [INFO][4767] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:18:55.475172 containerd[1459]: 2024-10-09 07:18:55.472 [INFO][4759] k8s.go 621: Teardown processing complete. ContainerID="3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b"
Oct 9 07:18:55.475701 containerd[1459]: time="2024-10-09T07:18:55.475225707Z" level=info msg="TearDown network for sandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\" successfully"
Oct 9 07:18:55.508607 containerd[1459]: time="2024-10-09T07:18:55.508514274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 07:18:55.508766 containerd[1459]: time="2024-10-09T07:18:55.508674274Z" level=info msg="RemovePodSandbox \"3cc17cc5d771aed8b38927648d05a58338ff3c91cd388fae652c9185a2ddf03b\" returns successfully"
Oct 9 07:18:55.509492 containerd[1459]: time="2024-10-09T07:18:55.509429537Z" level=info msg="StopPodSandbox for \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\""
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.554 [WARNING][4789] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ec9cda30-1f27-419f-b930-a288ad96787d", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6", Pod:"coredns-6f6b679f8f-zr4gk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1ded891298", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.554 [INFO][4789] k8s.go 608: Cleaning up netns ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1"
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.554 [INFO][4789] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" iface="eth0" netns=""
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.554 [INFO][4789] k8s.go 615: Releasing IP address(es) ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1"
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.554 [INFO][4789] utils.go 188: Calico CNI releasing IP address ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1"
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.585 [INFO][4796] ipam_plugin.go 417: Releasing address using handleID ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" HandleID="k8s-pod-network.ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0"
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.585 [INFO][4796] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.585 [INFO][4796] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.591 [WARNING][4796] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" HandleID="k8s-pod-network.ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0"
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.591 [INFO][4796] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" HandleID="k8s-pod-network.ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0"
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.592 [INFO][4796] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:18:55.597334 containerd[1459]: 2024-10-09 07:18:55.594 [INFO][4789] k8s.go 621: Teardown processing complete. ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1"
Oct 9 07:18:55.597334 containerd[1459]: time="2024-10-09T07:18:55.597293627Z" level=info msg="TearDown network for sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\" successfully"
Oct 9 07:18:55.597334 containerd[1459]: time="2024-10-09T07:18:55.597328805Z" level=info msg="StopPodSandbox for \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\" returns successfully"
Oct 9 07:18:55.598388 containerd[1459]: time="2024-10-09T07:18:55.598288894Z" level=info msg="RemovePodSandbox for \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\""
Oct 9 07:18:55.598444 containerd[1459]: time="2024-10-09T07:18:55.598397926Z" level=info msg="Forcibly stopping sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\""
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.632 [WARNING][4818] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ec9cda30-1f27-419f-b930-a288ad96787d", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f779cac2bfcf296cd0ea4280170e54146877059169ba31f392487834808955a6", Pod:"coredns-6f6b679f8f-zr4gk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1ded891298", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.632 [INFO][4818] k8s.go 608: Cleaning up netns ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1"
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.632 [INFO][4818] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" iface="eth0" netns=""
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.632 [INFO][4818] k8s.go 615: Releasing IP address(es) ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1"
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.632 [INFO][4818] utils.go 188: Calico CNI releasing IP address ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1"
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.651 [INFO][4826] ipam_plugin.go 417: Releasing address using handleID ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" HandleID="k8s-pod-network.ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0"
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.651 [INFO][4826] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.651 [INFO][4826] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.656 [WARNING][4826] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" HandleID="k8s-pod-network.ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0"
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.656 [INFO][4826] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" HandleID="k8s-pod-network.ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1" Workload="localhost-k8s-coredns--6f6b679f8f--zr4gk-eth0"
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.658 [INFO][4826] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:18:55.663822 containerd[1459]: 2024-10-09 07:18:55.661 [INFO][4818] k8s.go 621: Teardown processing complete. ContainerID="ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1"
Oct 9 07:18:55.664285 containerd[1459]: time="2024-10-09T07:18:55.663866363Z" level=info msg="TearDown network for sandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\" successfully"
Oct 9 07:18:55.755626 containerd[1459]: time="2024-10-09T07:18:55.755478513Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 07:18:55.755801 containerd[1459]: time="2024-10-09T07:18:55.755671426Z" level=info msg="RemovePodSandbox \"ad777a7950f0e7f42c02edd42e380755f40de4bdec3cdb0947565e341e558ba1\" returns successfully" Oct 9 07:18:55.756473 containerd[1459]: time="2024-10-09T07:18:55.756403344Z" level=info msg="StopPodSandbox for \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\"" Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.793 [WARNING][4848] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2lpj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5964cbe6-abee-481b-af82-9fee53352333", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141", Pod:"csi-node-driver-2lpj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali5c20a384ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.793 [INFO][4848] k8s.go 608: Cleaning up netns ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.793 [INFO][4848] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" iface="eth0" netns="" Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.793 [INFO][4848] k8s.go 615: Releasing IP address(es) ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.793 [INFO][4848] utils.go 188: Calico CNI releasing IP address ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.813 [INFO][4856] ipam_plugin.go 417: Releasing address using handleID ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" HandleID="k8s-pod-network.cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.813 [INFO][4856] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.813 [INFO][4856] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.819 [WARNING][4856] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" HandleID="k8s-pod-network.cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.819 [INFO][4856] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" HandleID="k8s-pod-network.cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.821 [INFO][4856] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:55.826286 containerd[1459]: 2024-10-09 07:18:55.823 [INFO][4848] k8s.go 621: Teardown processing complete. ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:55.826883 containerd[1459]: time="2024-10-09T07:18:55.826332531Z" level=info msg="TearDown network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\" successfully" Oct 9 07:18:55.826883 containerd[1459]: time="2024-10-09T07:18:55.826361867Z" level=info msg="StopPodSandbox for \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\" returns successfully" Oct 9 07:18:55.827019 containerd[1459]: time="2024-10-09T07:18:55.826976687Z" level=info msg="RemovePodSandbox for \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\"" Oct 9 07:18:55.827053 containerd[1459]: time="2024-10-09T07:18:55.827023328Z" level=info msg="Forcibly stopping sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\"" Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.867 [WARNING][4880] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2lpj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5964cbe6-abee-481b-af82-9fee53352333", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ac0342d8b9dbfbfc9418ef9cda4fa2a309df38542f524455f3f46b04fd16141", Pod:"csi-node-driver-2lpj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c20a384ca3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.867 [INFO][4880] k8s.go 608: Cleaning up netns ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.867 [INFO][4880] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" iface="eth0" netns="" Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.867 [INFO][4880] k8s.go 615: Releasing IP address(es) ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.867 [INFO][4880] utils.go 188: Calico CNI releasing IP address ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.888 [INFO][4889] ipam_plugin.go 417: Releasing address using handleID ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" HandleID="k8s-pod-network.cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.888 [INFO][4889] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.888 [INFO][4889] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.894 [WARNING][4889] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" HandleID="k8s-pod-network.cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.894 [INFO][4889] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" HandleID="k8s-pod-network.cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Workload="localhost-k8s-csi--node--driver--2lpj5-eth0" Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.896 [INFO][4889] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:18:55.901026 containerd[1459]: 2024-10-09 07:18:55.898 [INFO][4880] k8s.go 621: Teardown processing complete. ContainerID="cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33" Oct 9 07:18:55.901026 containerd[1459]: time="2024-10-09T07:18:55.900985345Z" level=info msg="TearDown network for sandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\" successfully" Oct 9 07:18:55.904981 containerd[1459]: time="2024-10-09T07:18:55.904948841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:18:55.905040 containerd[1459]: time="2024-10-09T07:18:55.905009969Z" level=info msg="RemovePodSandbox \"cf74dd0ab874cb3c9db00b39115b6838be7d810ada66f824f1c63912d6493e33\" returns successfully" Oct 9 07:18:55.905618 containerd[1459]: time="2024-10-09T07:18:55.905567018Z" level=info msg="StopPodSandbox for \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\"" Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.942 [WARNING][4912] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0", GenerateName:"calico-kube-controllers-8556d8f995-", Namespace:"calico-system", SelfLink:"", UID:"03c7dbdf-59ec-40e3-a4a2-c4039d2f5906", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8556d8f995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1", Pod:"calico-kube-controllers-8556d8f995-5lbzv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58a47c929e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.942 [INFO][4912] k8s.go 608: Cleaning up netns ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.942 [INFO][4912] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" iface="eth0" netns="" Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.942 [INFO][4912] k8s.go 615: Releasing IP address(es) ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.943 [INFO][4912] utils.go 188: Calico CNI releasing IP address ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.962 [INFO][4920] ipam_plugin.go 417: Releasing address using handleID ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" HandleID="k8s-pod-network.23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.962 [INFO][4920] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.962 [INFO][4920] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.967 [WARNING][4920] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" HandleID="k8s-pod-network.23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.967 [INFO][4920] ipam_plugin.go 445: Releasing address using workloadID ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" HandleID="k8s-pod-network.23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.969 [INFO][4920] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:55.973822 containerd[1459]: 2024-10-09 07:18:55.971 [INFO][4912] k8s.go 621: Teardown processing complete. ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:55.974259 containerd[1459]: time="2024-10-09T07:18:55.973864235Z" level=info msg="TearDown network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\" successfully" Oct 9 07:18:55.974259 containerd[1459]: time="2024-10-09T07:18:55.973893561Z" level=info msg="StopPodSandbox for \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\" returns successfully" Oct 9 07:18:55.974707 containerd[1459]: time="2024-10-09T07:18:55.974482822Z" level=info msg="RemovePodSandbox for \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\"" Oct 9 07:18:55.974707 containerd[1459]: time="2024-10-09T07:18:55.974538650Z" level=info msg="Forcibly stopping sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\"" Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.012 [WARNING][4942] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0", GenerateName:"calico-kube-controllers-8556d8f995-", Namespace:"calico-system", SelfLink:"", UID:"03c7dbdf-59ec-40e3-a4a2-c4039d2f5906", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8556d8f995", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04b096134a42a5cbce13cd1a6d8368c376d3aeb9e49121880e6303a552ed29a1", Pod:"calico-kube-controllers-8556d8f995-5lbzv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58a47c929e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.012 [INFO][4942] k8s.go 608: Cleaning up netns ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.012 [INFO][4942] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" iface="eth0" netns="" Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.012 [INFO][4942] k8s.go 615: Releasing IP address(es) ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.012 [INFO][4942] utils.go 188: Calico CNI releasing IP address ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.031 [INFO][4950] ipam_plugin.go 417: Releasing address using handleID ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" HandleID="k8s-pod-network.23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.032 [INFO][4950] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.032 [INFO][4950] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.037 [WARNING][4950] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" HandleID="k8s-pod-network.23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.037 [INFO][4950] ipam_plugin.go 445: Releasing address using workloadID ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" HandleID="k8s-pod-network.23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Workload="localhost-k8s-calico--kube--controllers--8556d8f995--5lbzv-eth0" Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.039 [INFO][4950] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:18:56.044035 containerd[1459]: 2024-10-09 07:18:56.041 [INFO][4942] k8s.go 621: Teardown processing complete. ContainerID="23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663" Oct 9 07:18:56.044483 containerd[1459]: time="2024-10-09T07:18:56.044081347Z" level=info msg="TearDown network for sandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\" successfully" Oct 9 07:18:56.047719 containerd[1459]: time="2024-10-09T07:18:56.047686432Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:18:56.047781 containerd[1459]: time="2024-10-09T07:18:56.047737320Z" level=info msg="RemovePodSandbox \"23c92dc7fa648d99d3a9416176627c9ea2cd106e7c71a0581f227aaeeab28663\" returns successfully" Oct 9 07:18:56.897970 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:51992.service - OpenSSH per-connection server daemon (10.0.0.1:51992). 
Oct 9 07:18:56.957016 sshd[4978]: Accepted publickey for core from 10.0.0.1 port 51992 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:18:56.959110 sshd[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:56.963621 systemd-logind[1442]: New session 16 of user core. Oct 9 07:18:56.974718 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 07:18:57.095566 sshd[4978]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:57.107655 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:51992.service: Deactivated successfully. Oct 9 07:18:57.109720 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:18:57.111495 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:18:57.116993 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:42606.service - OpenSSH per-connection server daemon (10.0.0.1:42606). Oct 9 07:18:57.118122 systemd-logind[1442]: Removed session 16. Oct 9 07:18:57.145545 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 42606 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:18:57.147781 sshd[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:57.152469 systemd-logind[1442]: New session 17 of user core. Oct 9 07:18:57.159776 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:18:57.425246 sshd[4993]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:57.442997 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:42606.service: Deactivated successfully. Oct 9 07:18:57.446163 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:18:57.449134 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:18:57.458876 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:42608.service - OpenSSH per-connection server daemon (10.0.0.1:42608). Oct 9 07:18:57.460091 systemd-logind[1442]: Removed session 17. 
Oct 9 07:18:57.493139 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 42608 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:18:57.495117 sshd[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:57.500163 systemd-logind[1442]: New session 18 of user core. Oct 9 07:18:57.511746 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 07:18:59.052481 sshd[5006]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:59.062625 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:42608.service: Deactivated successfully. Oct 9 07:18:59.065959 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:18:59.069529 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:18:59.078784 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:42624.service - OpenSSH per-connection server daemon (10.0.0.1:42624). Oct 9 07:18:59.080503 systemd-logind[1442]: Removed session 18. Oct 9 07:18:59.108334 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 42624 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:18:59.110005 sshd[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:59.114390 systemd-logind[1442]: New session 19 of user core. Oct 9 07:18:59.122726 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:18:59.347426 sshd[5030]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:59.357919 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:42624.service: Deactivated successfully. Oct 9 07:18:59.360790 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:18:59.362706 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:18:59.368903 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:42634.service - OpenSSH per-connection server daemon (10.0.0.1:42634). Oct 9 07:18:59.370062 systemd-logind[1442]: Removed session 19. 
Oct 9 07:18:59.397492 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 42634 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:18:59.399538 sshd[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:18:59.404422 systemd-logind[1442]: New session 20 of user core. Oct 9 07:18:59.411907 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 07:18:59.527630 sshd[5042]: pam_unix(sshd:session): session closed for user core Oct 9 07:18:59.533176 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:42634.service: Deactivated successfully. Oct 9 07:18:59.535669 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 07:18:59.536310 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:18:59.537413 systemd-logind[1442]: Removed session 20. Oct 9 07:19:04.544117 systemd[1]: Started sshd@20-10.0.0.70:22-10.0.0.1:42650.service - OpenSSH per-connection server daemon (10.0.0.1:42650). Oct 9 07:19:04.575147 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 42650 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:19:04.576809 sshd[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:04.580823 systemd-logind[1442]: New session 21 of user core. Oct 9 07:19:04.594708 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 07:19:04.702098 sshd[5080]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:04.707012 systemd[1]: sshd@20-10.0.0.70:22-10.0.0.1:42650.service: Deactivated successfully. Oct 9 07:19:04.710018 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 07:19:04.710763 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Oct 9 07:19:04.711822 systemd-logind[1442]: Removed session 21. Oct 9 07:19:09.717718 systemd[1]: Started sshd@21-10.0.0.70:22-10.0.0.1:43632.service - OpenSSH per-connection server daemon (10.0.0.1:43632). 
Oct 9 07:19:09.749718 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 43632 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs Oct 9 07:19:09.751497 sshd[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:19:09.756104 systemd-logind[1442]: New session 22 of user core. Oct 9 07:19:09.761745 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 07:19:09.866394 sshd[5109]: pam_unix(sshd:session): session closed for user core Oct 9 07:19:09.871034 systemd[1]: sshd@21-10.0.0.70:22-10.0.0.1:43632.service: Deactivated successfully. Oct 9 07:19:09.874180 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:19:09.875258 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. Oct 9 07:19:09.876110 systemd-logind[1442]: Removed session 22. Oct 9 07:19:10.326536 kubelet[2467]: E1009 07:19:10.326467 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:19:10.326536 kubelet[2467]: E1009 07:19:10.326516 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 07:19:11.218901 systemd[1]: Created slice kubepods-besteffort-pod7a555863_02c1_4214_bb75_f90de83a8ae5.slice - libcontainer container kubepods-besteffort-pod7a555863_02c1_4214_bb75_f90de83a8ae5.slice. 
Oct 9 07:19:11.387602 kubelet[2467]: I1009 07:19:11.387503 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a555863-02c1-4214-bb75-f90de83a8ae5-calico-apiserver-certs\") pod \"calico-apiserver-bfd5d954c-87dw8\" (UID: \"7a555863-02c1-4214-bb75-f90de83a8ae5\") " pod="calico-apiserver/calico-apiserver-bfd5d954c-87dw8" Oct 9 07:19:11.387602 kubelet[2467]: I1009 07:19:11.387563 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdr8t\" (UniqueName: \"kubernetes.io/projected/7a555863-02c1-4214-bb75-f90de83a8ae5-kube-api-access-tdr8t\") pod \"calico-apiserver-bfd5d954c-87dw8\" (UID: \"7a555863-02c1-4214-bb75-f90de83a8ae5\") " pod="calico-apiserver/calico-apiserver-bfd5d954c-87dw8" Oct 9 07:19:11.488468 kubelet[2467]: E1009 07:19:11.488304 2467 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:19:11.488468 kubelet[2467]: E1009 07:19:11.488413 2467 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a555863-02c1-4214-bb75-f90de83a8ae5-calico-apiserver-certs podName:7a555863-02c1-4214-bb75-f90de83a8ae5 nodeName:}" failed. No retries permitted until 2024-10-09 07:19:11.98839381 +0000 UTC m=+76.746745894 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/7a555863-02c1-4214-bb75-f90de83a8ae5-calico-apiserver-certs") pod "calico-apiserver-bfd5d954c-87dw8" (UID: "7a555863-02c1-4214-bb75-f90de83a8ae5") : secret "calico-apiserver-certs" not found
Oct 9 07:19:11.991715 kubelet[2467]: E1009 07:19:11.991647 2467 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 9 07:19:11.991877 kubelet[2467]: E1009 07:19:11.991740 2467 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a555863-02c1-4214-bb75-f90de83a8ae5-calico-apiserver-certs podName:7a555863-02c1-4214-bb75-f90de83a8ae5 nodeName:}" failed. No retries permitted until 2024-10-09 07:19:12.991724512 +0000 UTC m=+77.750076596 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/7a555863-02c1-4214-bb75-f90de83a8ae5-calico-apiserver-certs") pod "calico-apiserver-bfd5d954c-87dw8" (UID: "7a555863-02c1-4214-bb75-f90de83a8ae5") : secret "calico-apiserver-certs" not found
Oct 9 07:19:13.025562 containerd[1459]: time="2024-10-09T07:19:13.025510508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfd5d954c-87dw8,Uid:7a555863-02c1-4214-bb75-f90de83a8ae5,Namespace:calico-apiserver,Attempt:0,}"
Oct 9 07:19:13.225013 systemd-networkd[1396]: calidd989a780d4: Link UP
Oct 9 07:19:13.225229 systemd-networkd[1396]: calidd989a780d4: Gained carrier
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.066 [INFO][5128] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0 calico-apiserver-bfd5d954c- calico-apiserver 7a555863-02c1-4214-bb75-f90de83a8ae5 1141 0 2024-10-09 07:19:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bfd5d954c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bfd5d954c-87dw8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd989a780d4 [] []}} ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Namespace="calico-apiserver" Pod="calico-apiserver-bfd5d954c-87dw8" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.066 [INFO][5128] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Namespace="calico-apiserver" Pod="calico-apiserver-bfd5d954c-87dw8" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.092 [INFO][5141] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" HandleID="k8s-pod-network.ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Workload="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.100 [INFO][5141] ipam_plugin.go 270: Auto assigning IP ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" HandleID="k8s-pod-network.ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Workload="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bfd5d954c-87dw8", "timestamp":"2024-10-09 07:19:13.092757207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.100 [INFO][5141] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.100 [INFO][5141] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.100 [INFO][5141] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.101 [INFO][5141] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" host="localhost"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.106 [INFO][5141] ipam.go 372: Looking up existing affinities for host host="localhost"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.201 [INFO][5141] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.203 [INFO][5141] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.205 [INFO][5141] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.205 [INFO][5141] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" host="localhost"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.206 [INFO][5141] ipam.go 1685: Creating new handle: k8s-pod-network.ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.212 [INFO][5141] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" host="localhost"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.218 [INFO][5141] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" host="localhost"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.219 [INFO][5141] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" host="localhost"
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.219 [INFO][5141] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:19:13.238487 containerd[1459]: 2024-10-09 07:19:13.219 [INFO][5141] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" HandleID="k8s-pod-network.ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Workload="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0"
Oct 9 07:19:13.239028 containerd[1459]: 2024-10-09 07:19:13.222 [INFO][5128] k8s.go 386: Populated endpoint ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Namespace="calico-apiserver" Pod="calico-apiserver-bfd5d954c-87dw8" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0", GenerateName:"calico-apiserver-bfd5d954c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a555863-02c1-4214-bb75-f90de83a8ae5", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 19, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfd5d954c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bfd5d954c-87dw8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd989a780d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:19:13.239028 containerd[1459]: 2024-10-09 07:19:13.222 [INFO][5128] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Namespace="calico-apiserver" Pod="calico-apiserver-bfd5d954c-87dw8" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0"
Oct 9 07:19:13.239028 containerd[1459]: 2024-10-09 07:19:13.222 [INFO][5128] dataplane_linux.go 68: Setting the host side veth name to calidd989a780d4 ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Namespace="calico-apiserver" Pod="calico-apiserver-bfd5d954c-87dw8" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0"
Oct 9 07:19:13.239028 containerd[1459]: 2024-10-09 07:19:13.224 [INFO][5128] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Namespace="calico-apiserver" Pod="calico-apiserver-bfd5d954c-87dw8" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0"
Oct 9 07:19:13.239028 containerd[1459]: 2024-10-09 07:19:13.224 [INFO][5128] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Namespace="calico-apiserver" Pod="calico-apiserver-bfd5d954c-87dw8" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0", GenerateName:"calico-apiserver-bfd5d954c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a555863-02c1-4214-bb75-f90de83a8ae5", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 19, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfd5d954c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a", Pod:"calico-apiserver-bfd5d954c-87dw8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd989a780d4", MAC:"46:84:65:19:21:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:19:13.239028 containerd[1459]: 2024-10-09 07:19:13.232 [INFO][5128] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a" Namespace="calico-apiserver" Pod="calico-apiserver-bfd5d954c-87dw8" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfd5d954c--87dw8-eth0"
Oct 9 07:19:13.258298 containerd[1459]: time="2024-10-09T07:19:13.258175065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:19:13.258298 containerd[1459]: time="2024-10-09T07:19:13.258241552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:19:13.258298 containerd[1459]: time="2024-10-09T07:19:13.258263714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:19:13.258298 containerd[1459]: time="2024-10-09T07:19:13.258277791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:19:13.284732 systemd[1]: Started cri-containerd-ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a.scope - libcontainer container ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a.
Oct 9 07:19:13.297607 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 9 07:19:13.324970 containerd[1459]: time="2024-10-09T07:19:13.324917139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfd5d954c-87dw8,Uid:7a555863-02c1-4214-bb75-f90de83a8ae5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a\""
Oct 9 07:19:13.326570 containerd[1459]: time="2024-10-09T07:19:13.326527370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 9 07:19:14.894235 systemd[1]: Started sshd@22-10.0.0.70:22-10.0.0.1:43634.service - OpenSSH per-connection server daemon (10.0.0.1:43634).
Oct 9 07:19:14.930919 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 43634 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs
Oct 9 07:19:14.933238 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:19:14.939096 systemd-logind[1442]: New session 23 of user core.
Oct 9 07:19:14.945789 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 9 07:19:15.034011 systemd-networkd[1396]: calidd989a780d4: Gained IPv6LL
Oct 9 07:19:15.131277 sshd[5209]: pam_unix(sshd:session): session closed for user core
Oct 9 07:19:15.138734 systemd[1]: sshd@22-10.0.0.70:22-10.0.0.1:43634.service: Deactivated successfully.
Oct 9 07:19:15.141336 systemd[1]: session-23.scope: Deactivated successfully.
Oct 9 07:19:15.142962 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit.
Oct 9 07:19:15.144156 systemd-logind[1442]: Removed session 23.
Oct 9 07:19:15.668963 containerd[1459]: time="2024-10-09T07:19:15.668894389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:15.669691 containerd[1459]: time="2024-10-09T07:19:15.669629725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 9 07:19:15.670987 containerd[1459]: time="2024-10-09T07:19:15.670910623Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:15.674923 containerd[1459]: time="2024-10-09T07:19:15.674869088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:19:15.675703 containerd[1459]: time="2024-10-09T07:19:15.675668475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.34909655s"
Oct 9 07:19:15.675770 containerd[1459]: time="2024-10-09T07:19:15.675703572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 9 07:19:15.678088 containerd[1459]: time="2024-10-09T07:19:15.678041851Z" level=info msg="CreateContainer within sandbox \"ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 9 07:19:15.695141 containerd[1459]: time="2024-10-09T07:19:15.695074361Z" level=info msg="CreateContainer within sandbox \"ad526dffa57b50641157e2512e3f7974bc431e117c5fcb9ef0c9407618aff91a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"db640c3ca6b4b5616096c2597db72281a9abbd3c057e66849801d2240b161fa2\""
Oct 9 07:19:15.695843 containerd[1459]: time="2024-10-09T07:19:15.695767987Z" level=info msg="StartContainer for \"db640c3ca6b4b5616096c2597db72281a9abbd3c057e66849801d2240b161fa2\""
Oct 9 07:19:15.732838 systemd[1]: Started cri-containerd-db640c3ca6b4b5616096c2597db72281a9abbd3c057e66849801d2240b161fa2.scope - libcontainer container db640c3ca6b4b5616096c2597db72281a9abbd3c057e66849801d2240b161fa2.
Oct 9 07:19:15.780519 containerd[1459]: time="2024-10-09T07:19:15.780465730Z" level=info msg="StartContainer for \"db640c3ca6b4b5616096c2597db72281a9abbd3c057e66849801d2240b161fa2\" returns successfully"
Oct 9 07:19:16.595951 kubelet[2467]: I1009 07:19:16.595849 2467 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bfd5d954c-87dw8" podStartSLOduration=3.24532295 podStartE2EDuration="5.595815039s" podCreationTimestamp="2024-10-09 07:19:11 +0000 UTC" firstStartedPulling="2024-10-09 07:19:13.326112836 +0000 UTC m=+78.084464920" lastFinishedPulling="2024-10-09 07:19:15.676604925 +0000 UTC m=+80.434957009" observedRunningTime="2024-10-09 07:19:16.585383884 +0000 UTC m=+81.343735978" watchObservedRunningTime="2024-10-09 07:19:16.595815039 +0000 UTC m=+81.354167133"
Oct 9 07:19:17.327385 kubelet[2467]: E1009 07:19:17.327312 2467 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 07:19:20.150474 systemd[1]: Started sshd@23-10.0.0.70:22-10.0.0.1:33288.service - OpenSSH per-connection server daemon (10.0.0.1:33288).
Oct 9 07:19:20.188857 sshd[5279]: Accepted publickey for core from 10.0.0.1 port 33288 ssh2: RSA SHA256:QXYNuVrsZeWVZ//RuUq1vxYb0xIs9XA8/7zPDOlSIEs
Oct 9 07:19:20.190548 sshd[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:19:20.195423 systemd-logind[1442]: New session 24 of user core.
Oct 9 07:19:20.210782 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 07:19:20.363532 sshd[5279]: pam_unix(sshd:session): session closed for user core
Oct 9 07:19:20.368485 systemd[1]: sshd@23-10.0.0.70:22-10.0.0.1:33288.service: Deactivated successfully.
Oct 9 07:19:20.371708 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 07:19:20.372422 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit.
Oct 9 07:19:20.373507 systemd-logind[1442]: Removed session 24.