Aug 5 22:20:27.955739 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:27 -00 2024
Aug 5 22:20:27.955759 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:20:27.955769 kernel: BIOS-provided physical RAM map:
Aug 5 22:20:27.955776 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 5 22:20:27.955782 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Aug 5 22:20:27.955788 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Aug 5 22:20:27.955795 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Aug 5 22:20:27.955801 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Aug 5 22:20:27.955807 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Aug 5 22:20:27.955813 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Aug 5 22:20:27.955822 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Aug 5 22:20:27.955828 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Aug 5 22:20:27.955834 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Aug 5 22:20:27.955840 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Aug 5 22:20:27.955848 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Aug 5 22:20:27.955857 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Aug 5 22:20:27.955863 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Aug 5 22:20:27.955870 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Aug 5 22:20:27.955876 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Aug 5 22:20:27.955883 kernel: NX (Execute Disable) protection: active
Aug 5 22:20:27.955890 kernel: APIC: Static calls initialized
Aug 5 22:20:27.955896 kernel: efi: EFI v2.7 by EDK II
Aug 5 22:20:27.955903 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4ee018
Aug 5 22:20:27.955909 kernel: SMBIOS 2.8 present.
Aug 5 22:20:27.955916 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Aug 5 22:20:27.955922 kernel: Hypervisor detected: KVM
Aug 5 22:20:27.955929 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 5 22:20:27.955938 kernel: kvm-clock: using sched offset of 5271674706 cycles
Aug 5 22:20:27.955945 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 5 22:20:27.955952 kernel: tsc: Detected 2794.750 MHz processor
Aug 5 22:20:27.955959 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 5 22:20:27.955966 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 5 22:20:27.955973 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Aug 5 22:20:27.955980 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 5 22:20:27.955987 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 5 22:20:27.955993 kernel: Using GB pages for direct mapping
Aug 5 22:20:27.956002 kernel: Secure boot disabled
Aug 5 22:20:27.956009 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:20:27.956016 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Aug 5 22:20:27.956023 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Aug 5 22:20:27.956033 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:20:27.956040 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:20:27.956049 kernel: ACPI: FACS 0x000000009CBDD000 000040
Aug 5 22:20:27.956056 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:20:27.956064 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:20:27.956071 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:20:27.956078 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 5 22:20:27.956085 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Aug 5 22:20:27.956092 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Aug 5 22:20:27.956099 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Aug 5 22:20:27.956108 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Aug 5 22:20:27.956115 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Aug 5 22:20:27.956122 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Aug 5 22:20:27.956129 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Aug 5 22:20:27.956136 kernel: No NUMA configuration found
Aug 5 22:20:27.956143 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Aug 5 22:20:27.956151 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Aug 5 22:20:27.956158 kernel: Zone ranges:
Aug 5 22:20:27.956165 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 5 22:20:27.956174 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Aug 5 22:20:27.956181 kernel: Normal empty
Aug 5 22:20:27.956188 kernel: Movable zone start for each node
Aug 5 22:20:27.956195 kernel: Early memory node ranges
Aug 5 22:20:27.956202 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 5 22:20:27.956209 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Aug 5 22:20:27.956216 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Aug 5 22:20:27.956223 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Aug 5 22:20:27.956230 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Aug 5 22:20:27.956237 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Aug 5 22:20:27.956246 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Aug 5 22:20:27.956254 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 5 22:20:27.956261 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 5 22:20:27.956278 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Aug 5 22:20:27.956286 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 5 22:20:27.956293 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Aug 5 22:20:27.956300 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 5 22:20:27.956318 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Aug 5 22:20:27.956325 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 5 22:20:27.956335 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 5 22:20:27.956342 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 5 22:20:27.956349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 5 22:20:27.956356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 5 22:20:27.956373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 5 22:20:27.956380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 5 22:20:27.956387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 5 22:20:27.956395 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 5 22:20:27.956402 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 5 22:20:27.956411 kernel: TSC deadline timer available
Aug 5 22:20:27.956419 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 5 22:20:27.956426 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 5 22:20:27.956433 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 5 22:20:27.956440 kernel: kvm-guest: setup PV sched yield
Aug 5 22:20:27.956447 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Aug 5 22:20:27.956454 kernel: Booting paravirtualized kernel on KVM
Aug 5 22:20:27.956461 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 5 22:20:27.956469 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 5 22:20:27.956478 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Aug 5 22:20:27.956485 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Aug 5 22:20:27.956492 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 5 22:20:27.956499 kernel: kvm-guest: PV spinlocks enabled
Aug 5 22:20:27.956506 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 5 22:20:27.956515 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:20:27.956522 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:20:27.956529 kernel: random: crng init done
Aug 5 22:20:27.956536 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 22:20:27.956546 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:20:27.956553 kernel: Fallback order for Node 0: 0
Aug 5 22:20:27.956560 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Aug 5 22:20:27.956567 kernel: Policy zone: DMA32
Aug 5 22:20:27.956574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:20:27.956582 kernel: Memory: 2388156K/2567000K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49328K init, 2016K bss, 178584K reserved, 0K cma-reserved)
Aug 5 22:20:27.956589 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 22:20:27.956596 kernel: ftrace: allocating 37659 entries in 148 pages
Aug 5 22:20:27.956605 kernel: ftrace: allocated 148 pages with 3 groups
Aug 5 22:20:27.956612 kernel: Dynamic Preempt: voluntary
Aug 5 22:20:27.956620 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:20:27.956627 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:20:27.956635 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 22:20:27.956649 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:20:27.956659 kernel: Rude variant of Tasks RCU enabled.
Aug 5 22:20:27.956666 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:20:27.956674 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:20:27.956681 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 22:20:27.956688 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 5 22:20:27.956696 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:20:27.956705 kernel: Console: colour dummy device 80x25
Aug 5 22:20:27.956713 kernel: printk: console [ttyS0] enabled
Aug 5 22:20:27.956720 kernel: ACPI: Core revision 20230628
Aug 5 22:20:27.956728 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 5 22:20:27.956735 kernel: APIC: Switch to symmetric I/O mode setup
Aug 5 22:20:27.956745 kernel: x2apic enabled
Aug 5 22:20:27.956753 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 5 22:20:27.956760 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 5 22:20:27.956768 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 5 22:20:27.956775 kernel: kvm-guest: setup PV IPIs
Aug 5 22:20:27.956782 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 5 22:20:27.956790 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 5 22:20:27.956797 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Aug 5 22:20:27.956805 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 5 22:20:27.956814 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 5 22:20:27.956822 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 5 22:20:27.956829 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 5 22:20:27.956837 kernel: Spectre V2 : Mitigation: Retpolines
Aug 5 22:20:27.956844 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Aug 5 22:20:27.956852 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Aug 5 22:20:27.956859 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 5 22:20:27.956866 kernel: RETBleed: Mitigation: untrained return thunk
Aug 5 22:20:27.956874 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 5 22:20:27.956884 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 5 22:20:27.956891 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 5 22:20:27.956899 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 5 22:20:27.956907 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 5 22:20:27.956914 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 5 22:20:27.956922 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 5 22:20:27.956929 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 5 22:20:27.956937 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 5 22:20:27.956944 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 5 22:20:27.956954 kernel: Freeing SMP alternatives memory: 32K
Aug 5 22:20:27.956963 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:20:27.956972 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:20:27.956980 kernel: SELinux: Initializing.
Aug 5 22:20:27.956989 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:20:27.956997 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:20:27.957004 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 5 22:20:27.957012 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:20:27.957021 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:20:27.957029 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:20:27.957036 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 5 22:20:27.957043 kernel: ... version: 0
Aug 5 22:20:27.957051 kernel: ... bit width: 48
Aug 5 22:20:27.957058 kernel: ... generic registers: 6
Aug 5 22:20:27.957065 kernel: ... value mask: 0000ffffffffffff
Aug 5 22:20:27.957073 kernel: ... max period: 00007fffffffffff
Aug 5 22:20:27.957080 kernel: ... fixed-purpose events: 0
Aug 5 22:20:27.957090 kernel: ... event mask: 000000000000003f
Aug 5 22:20:27.957097 kernel: signal: max sigframe size: 1776
Aug 5 22:20:27.957105 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:20:27.957112 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:20:27.957119 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:20:27.957127 kernel: smpboot: x86: Booting SMP configuration:
Aug 5 22:20:27.957134 kernel: .... node #0, CPUs: #1 #2 #3
Aug 5 22:20:27.957141 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 22:20:27.957149 kernel: smpboot: Max logical packages: 1
Aug 5 22:20:27.957156 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Aug 5 22:20:27.957166 kernel: devtmpfs: initialized
Aug 5 22:20:27.957173 kernel: x86/mm: Memory block size: 128MB
Aug 5 22:20:27.957181 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Aug 5 22:20:27.957188 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Aug 5 22:20:27.957196 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Aug 5 22:20:27.957203 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Aug 5 22:20:27.957211 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Aug 5 22:20:27.957219 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:20:27.957226 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 22:20:27.957236 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:20:27.957243 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:20:27.957250 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:20:27.957258 kernel: audit: type=2000 audit(1722896427.011:1): state=initialized audit_enabled=0 res=1
Aug 5 22:20:27.957266 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:20:27.957308 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 5 22:20:27.957316 kernel: cpuidle: using governor menu
Aug 5 22:20:27.957323 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:20:27.957331 kernel: dca service started, version 1.12.1
Aug 5 22:20:27.957340 kernel: PCI: Using configuration type 1 for base access
Aug 5 22:20:27.957348 kernel: PCI: Using configuration type 1 for extended access
Aug 5 22:20:27.957355 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 5 22:20:27.957369 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:20:27.957378 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:20:27.957385 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:20:27.957392 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:20:27.957400 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:20:27.957409 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:20:27.957417 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:20:27.957424 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:20:27.957431 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 22:20:27.957439 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 5 22:20:27.957446 kernel: ACPI: Interpreter enabled
Aug 5 22:20:27.957454 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 5 22:20:27.957461 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 5 22:20:27.957469 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 5 22:20:27.957476 kernel: PCI: Using E820 reservations for host bridge windows
Aug 5 22:20:27.957486 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 5 22:20:27.957495 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:20:27.957675 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:20:27.957688 kernel: acpiphp: Slot [3] registered
Aug 5 22:20:27.957696 kernel: acpiphp: Slot [4] registered
Aug 5 22:20:27.957703 kernel: acpiphp: Slot [5] registered
Aug 5 22:20:27.957711 kernel: acpiphp: Slot [6] registered
Aug 5 22:20:27.957718 kernel: acpiphp: Slot [7] registered
Aug 5 22:20:27.957728 kernel: acpiphp: Slot [8] registered
Aug 5 22:20:27.957736 kernel: acpiphp: Slot [9] registered
Aug 5 22:20:27.957743 kernel: acpiphp: Slot [10] registered
Aug 5 22:20:27.957751 kernel: acpiphp: Slot [11] registered
Aug 5 22:20:27.957758 kernel: acpiphp: Slot [12] registered
Aug 5 22:20:27.957766 kernel: acpiphp: Slot [13] registered
Aug 5 22:20:27.957773 kernel: acpiphp: Slot [14] registered
Aug 5 22:20:27.957780 kernel: acpiphp: Slot [15] registered
Aug 5 22:20:27.957788 kernel: acpiphp: Slot [16] registered
Aug 5 22:20:27.957797 kernel: acpiphp: Slot [17] registered
Aug 5 22:20:27.957804 kernel: acpiphp: Slot [18] registered
Aug 5 22:20:27.957812 kernel: acpiphp: Slot [19] registered
Aug 5 22:20:27.957819 kernel: acpiphp: Slot [20] registered
Aug 5 22:20:27.957826 kernel: acpiphp: Slot [21] registered
Aug 5 22:20:27.957834 kernel: acpiphp: Slot [22] registered
Aug 5 22:20:27.957841 kernel: acpiphp: Slot [23] registered
Aug 5 22:20:27.957848 kernel: acpiphp: Slot [24] registered
Aug 5 22:20:27.957856 kernel: acpiphp: Slot [25] registered
Aug 5 22:20:27.957863 kernel: acpiphp: Slot [26] registered
Aug 5 22:20:27.957872 kernel: acpiphp: Slot [27] registered
Aug 5 22:20:27.957880 kernel: acpiphp: Slot [28] registered
Aug 5 22:20:27.957887 kernel: acpiphp: Slot [29] registered
Aug 5 22:20:27.957894 kernel: acpiphp: Slot [30] registered
Aug 5 22:20:27.957902 kernel: acpiphp: Slot [31] registered
Aug 5 22:20:27.957909 kernel: PCI host bridge to bus 0000:00
Aug 5 22:20:27.958043 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 5 22:20:27.958155 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 5 22:20:27.958282 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 5 22:20:27.958403 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Aug 5 22:20:27.958513 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Aug 5 22:20:27.958622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:20:27.958766 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 5 22:20:27.958928 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 5 22:20:27.959101 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug 5 22:20:27.959257 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Aug 5 22:20:27.959442 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 5 22:20:27.959594 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 5 22:20:27.959744 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 5 22:20:27.959893 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 5 22:20:27.960056 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 5 22:20:27.960213 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 5 22:20:27.960406 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Aug 5 22:20:27.960570 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Aug 5 22:20:27.960721 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Aug 5 22:20:27.960872 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Aug 5 22:20:27.961027 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 5 22:20:27.961175 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Aug 5 22:20:27.961348 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 5 22:20:27.961523 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 22:20:27.961678 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Aug 5 22:20:27.961828 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Aug 5 22:20:27.961958 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Aug 5 22:20:27.962089 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Aug 5 22:20:27.962214 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Aug 5 22:20:27.962351 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Aug 5 22:20:27.962480 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Aug 5 22:20:27.962610 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Aug 5 22:20:27.962731 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Aug 5 22:20:27.962850 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Aug 5 22:20:27.962970 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Aug 5 22:20:27.963089 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Aug 5 22:20:27.963103 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 5 22:20:27.963111 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 5 22:20:27.963119 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 5 22:20:27.963126 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 5 22:20:27.963134 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 5 22:20:27.963141 kernel: iommu: Default domain type: Translated
Aug 5 22:20:27.963149 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 5 22:20:27.963156 kernel: efivars: Registered efivars operations
Aug 5 22:20:27.963164 kernel: PCI: Using ACPI for IRQ routing
Aug 5 22:20:27.963174 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 5 22:20:27.963181 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Aug 5 22:20:27.963189 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Aug 5 22:20:27.963196 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Aug 5 22:20:27.963203 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Aug 5 22:20:27.963337 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 5 22:20:27.963467 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 5 22:20:27.963587 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 5 22:20:27.963601 kernel: vgaarb: loaded
Aug 5 22:20:27.963608 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 5 22:20:27.963616 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 5 22:20:27.963623 kernel: clocksource: Switched to clocksource kvm-clock
Aug 5 22:20:27.963631 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:20:27.963638 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:20:27.963646 kernel: pnp: PnP ACPI init
Aug 5 22:20:27.963780 kernel: pnp 00:02: [dma 2]
Aug 5 22:20:27.963794 kernel: pnp: PnP ACPI: found 6 devices
Aug 5 22:20:27.963802 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 5 22:20:27.963810 kernel: NET: Registered PF_INET protocol family
Aug 5 22:20:27.963817 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 22:20:27.963825 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 22:20:27.963833 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:20:27.963840 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:20:27.963848 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 22:20:27.963855 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 22:20:27.963865 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:20:27.963873 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:20:27.963880 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:20:27.963888 kernel: NET: Registered PF_XDP protocol family
Aug 5 22:20:27.964009 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Aug 5 22:20:27.964130 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Aug 5 22:20:27.964241 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 5 22:20:27.964382 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 5 22:20:27.964498 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 5 22:20:27.964609 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Aug 5 22:20:27.964718 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Aug 5 22:20:27.964852 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 5 22:20:27.964972 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 5 22:20:27.964983 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:20:27.964990 kernel: Initialise system trusted keyrings
Aug 5 22:20:27.964998 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 22:20:27.965010 kernel: Key type asymmetric registered
Aug 5 22:20:27.965017 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:20:27.965025 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 5 22:20:27.965035 kernel: io scheduler mq-deadline registered
Aug 5 22:20:27.965051 kernel: io scheduler kyber registered
Aug 5 22:20:27.965062 kernel: io scheduler bfq registered
Aug 5 22:20:27.965073 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 5 22:20:27.965083 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 5 22:20:27.965094 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 5 22:20:27.965109 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 5 22:20:27.965116 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:20:27.965124 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 5 22:20:27.965132 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 5 22:20:27.965155 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 5 22:20:27.965165 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 5 22:20:27.965173 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 5 22:20:27.965381 kernel: rtc_cmos 00:05: RTC can wake from S4
Aug 5 22:20:27.965497 kernel: rtc_cmos 00:05: registered as rtc0
Aug 5 22:20:27.965636 kernel: rtc_cmos 00:05: setting system clock to 2024-08-05T22:20:27 UTC (1722896427)
Aug 5 22:20:27.965754 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 5 22:20:27.965764 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 5 22:20:27.965772 kernel: efifb: probing for efifb
Aug 5 22:20:27.965779 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Aug 5 22:20:27.965787 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Aug 5 22:20:27.965795 kernel: efifb: scrolling: redraw
Aug 5 22:20:27.965803 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Aug 5 22:20:27.965815 kernel: Console: switching to colour frame buffer device 100x37
Aug 5 22:20:27.965823 kernel: fb0: EFI VGA frame buffer device
Aug 5 22:20:27.965831 kernel: pstore: Using crash dump compression: deflate
Aug 5 22:20:27.965841 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 5 22:20:27.965849 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:20:27.965857 kernel: Segment Routing with IPv6
Aug 5 22:20:27.965865 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:20:27.965872 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:20:27.965880 kernel: Key type dns_resolver registered
Aug 5 22:20:27.965890 kernel: IPI shorthand broadcast: enabled
Aug 5 22:20:27.965898 kernel: sched_clock: Marking stable (700004493, 111694913)->(863443456, -51744050)
Aug 5 22:20:27.965908 kernel: registered taskstats version 1
Aug 5 22:20:27.965916 kernel: Loading compiled-in X.509 certificates
Aug 5 22:20:27.965924 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: e31e857530e65c19b206dbf3ab8297cc37ac5d55'
Aug 5 22:20:27.965932 kernel: Key type .fscrypt registered
Aug 5 22:20:27.965941 kernel: Key type fscrypt-provisioning registered
Aug 5 22:20:27.965949 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:20:27.965957 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:20:27.965965 kernel: ima: No architecture policies found
Aug 5 22:20:27.965973 kernel: clk: Disabling unused clocks
Aug 5 22:20:27.965981 kernel: Freeing unused kernel image (initmem) memory: 49328K
Aug 5 22:20:27.965989 kernel: Write protecting the kernel read-only data: 36864k
Aug 5 22:20:27.965997 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Aug 5 22:20:27.966004 kernel: Run /init as init process
Aug 5 22:20:27.966014 kernel: with arguments:
Aug 5 22:20:27.966022 kernel: /init
Aug 5 22:20:27.966030 kernel: with environment:
Aug 5 22:20:27.966037 kernel: HOME=/
Aug 5 22:20:27.966056 kernel: TERM=linux
Aug 5 22:20:27.966064 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:20:27.966082 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:20:27.966102 systemd[1]: Detected virtualization kvm.
Aug 5 22:20:27.966111 systemd[1]: Detected architecture x86-64.
Aug 5 22:20:27.966126 systemd[1]: Running in initrd.
Aug 5 22:20:27.966149 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:20:27.966158 systemd[1]: Hostname set to .
Aug 5 22:20:27.966167 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:20:27.966175 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:20:27.966184 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:20:27.966194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:20:27.966204 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:20:27.966212 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:20:27.966221 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:20:27.966229 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:20:27.966239 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:20:27.966248 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:20:27.966258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:20:27.966279 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:20:27.966288 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:20:27.966296 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:20:27.966305 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:20:27.966313 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:20:27.966324 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:20:27.966336 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:20:27.966348 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:20:27.966356 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:20:27.966374 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:20:27.966383 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:20:27.966391 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:20:27.966400 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:20:27.966408 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:20:27.966424 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:20:27.966433 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 22:20:27.966444 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 22:20:27.966452 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:20:27.966461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:20:27.966469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:20:27.966478 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 22:20:27.966486 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:20:27.966494 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 22:20:27.966526 systemd-journald[193]: Collecting audit messages is disabled.
Aug 5 22:20:27.966546 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:20:27.966556 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:20:27.966564 systemd-journald[193]: Journal started
Aug 5 22:20:27.966582 systemd-journald[193]: Runtime Journal (/run/log/journal/d7f7ed361721480aa563fa54ed26145e) is 6.0M, max 48.3M, 42.3M free.
Aug 5 22:20:27.969305 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:20:27.970598 systemd-modules-load[194]: Inserted module 'overlay'
Aug 5 22:20:27.970622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:20:27.981506 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:20:27.985257 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:20:27.989430 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:20:28.000659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:20:28.004746 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:20:28.005885 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:20:28.013298 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:20:28.016489 systemd-modules-load[194]: Inserted module 'br_netfilter'
Aug 5 22:20:28.017639 kernel: Bridge firewalling registered
Aug 5 22:20:28.018200 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:20:28.021475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:20:28.021909 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:20:28.033227 dracut-cmdline[220]: dracut-dracut-053
Aug 5 22:20:28.036723 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:20:28.052260 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:20:28.062498 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:20:28.093117 systemd-resolved[238]: Positive Trust Anchors:
Aug 5 22:20:28.093144 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:20:28.093186 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:20:28.096399 systemd-resolved[238]: Defaulting to hostname 'linux'.
Aug 5 22:20:28.097555 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:20:28.103918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:20:28.165321 kernel: SCSI subsystem initialized
Aug 5 22:20:28.176309 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:20:28.188313 kernel: iscsi: registered transport (tcp)
Aug 5 22:20:28.214399 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:20:28.214475 kernel: QLogic iSCSI HBA Driver
Aug 5 22:20:28.264926 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:20:28.276495 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:20:28.308760 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:20:28.308835 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:20:28.309835 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:20:28.357308 kernel: raid6: avx2x4 gen() 23497 MB/s
Aug 5 22:20:28.374307 kernel: raid6: avx2x2 gen() 24779 MB/s
Aug 5 22:20:28.391610 kernel: raid6: avx2x1 gen() 20323 MB/s
Aug 5 22:20:28.391698 kernel: raid6: using algorithm avx2x2 gen() 24779 MB/s
Aug 5 22:20:28.409743 kernel: raid6: .... xor() 13901 MB/s, rmw enabled
Aug 5 22:20:28.409770 kernel: raid6: using avx2x2 recovery algorithm
Aug 5 22:20:28.441308 kernel: xor: automatically using best checksumming function avx
Aug 5 22:20:28.661315 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:20:28.676931 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:20:28.687507 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:20:28.702220 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Aug 5 22:20:28.706703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:20:28.714461 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:20:28.727580 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Aug 5 22:20:28.759365 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:20:28.771554 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:20:28.836746 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:20:28.845521 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:20:28.861814 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:20:28.865204 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:20:28.867760 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:20:28.870180 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:20:28.877292 kernel: cryptd: max_cpu_qlen set to 1000
Aug 5 22:20:28.882432 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:20:28.893389 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 5 22:20:28.912404 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 5 22:20:28.912421 kernel: AES CTR mode by8 optimization enabled
Aug 5 22:20:28.912432 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 22:20:28.912573 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:20:28.912584 kernel: GPT:9289727 != 19775487
Aug 5 22:20:28.912602 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:20:28.912612 kernel: GPT:9289727 != 19775487
Aug 5 22:20:28.912622 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:20:28.912738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:20:28.896606 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:20:28.896718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:20:28.898554 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:20:28.900348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:20:28.900503 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:20:28.901943 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:20:28.914602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:20:28.918742 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:20:28.933330 kernel: libata version 3.00 loaded.
Aug 5 22:20:28.934450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:20:28.934601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:20:28.939837 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 5 22:20:28.951035 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (470)
Aug 5 22:20:28.951049 kernel: scsi host0: ata_piix
Aug 5 22:20:28.951215 kernel: scsi host1: ata_piix
Aug 5 22:20:28.951391 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Aug 5 22:20:28.951402 kernel: BTRFS: device fsid d3844c60-0a2c-449a-9ee9-2a875f8d8e12 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (466)
Aug 5 22:20:28.951412 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Aug 5 22:20:28.953652 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 22:20:28.960835 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 22:20:28.974819 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:20:28.978929 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 22:20:28.979516 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 22:20:28.991408 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:20:28.993136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:20:29.009094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:20:29.013002 disk-uuid[538]: Primary Header is updated.
Aug 5 22:20:29.013002 disk-uuid[538]: Secondary Entries is updated.
Aug 5 22:20:29.013002 disk-uuid[538]: Secondary Header is updated.
Aug 5 22:20:29.016481 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:20:29.012859 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:20:29.050162 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:20:29.109364 kernel: ata2: found unknown device (class 0)
Aug 5 22:20:29.111297 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 5 22:20:29.113294 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 5 22:20:29.158432 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 5 22:20:29.171418 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 5 22:20:29.171442 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Aug 5 22:20:30.025333 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:20:30.025414 disk-uuid[543]: The operation has completed successfully.
Aug 5 22:20:30.057874 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 22:20:30.058007 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 22:20:30.077645 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 22:20:30.081777 sh[582]: Success
Aug 5 22:20:30.096290 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 5 22:20:30.132926 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 22:20:30.147940 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 22:20:30.152612 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 22:20:30.161820 kernel: BTRFS info (device dm-0): first mount of filesystem d3844c60-0a2c-449a-9ee9-2a875f8d8e12
Aug 5 22:20:30.161872 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:20:30.161887 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 22:20:30.162859 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 22:20:30.163610 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 22:20:30.167945 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 22:20:30.170402 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 22:20:30.179408 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 22:20:30.182117 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 22:20:30.190872 kernel: BTRFS info (device vda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:20:30.190916 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:20:30.190928 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:20:30.194399 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:20:30.203436 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 22:20:30.205354 kernel: BTRFS info (device vda6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:20:30.279371 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:20:30.293642 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:20:30.319398 systemd-networkd[760]: lo: Link UP
Aug 5 22:20:30.319409 systemd-networkd[760]: lo: Gained carrier
Aug 5 22:20:30.321004 systemd-networkd[760]: Enumeration completed
Aug 5 22:20:30.321403 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:20:30.321407 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:20:30.322168 systemd-networkd[760]: eth0: Link UP
Aug 5 22:20:30.322171 systemd-networkd[760]: eth0: Gained carrier
Aug 5 22:20:30.322178 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:20:30.326846 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:20:30.333115 systemd[1]: Reached target network.target - Network.
Aug 5 22:20:30.376404 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.155/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:20:30.717340 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 22:20:30.730570 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 22:20:30.778490 ignition[765]: Ignition 2.18.0
Aug 5 22:20:30.778501 ignition[765]: Stage: fetch-offline
Aug 5 22:20:30.778540 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:20:30.778551 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:20:30.778730 ignition[765]: parsed url from cmdline: ""
Aug 5 22:20:30.778734 ignition[765]: no config URL provided
Aug 5 22:20:30.778739 ignition[765]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:20:30.778748 ignition[765]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:20:30.778774 ignition[765]: op(1): [started] loading QEMU firmware config module
Aug 5 22:20:30.778779 ignition[765]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 22:20:30.785109 ignition[765]: op(1): [finished] loading QEMU firmware config module
Aug 5 22:20:30.824905 ignition[765]: parsing config with SHA512: 54a2c91550979b2f52781ae9010a331cd7c6986ed9e60982cfda5ba2f0e8dc2fedc3b1a62a32b290d524bc2c02dadc7a9f97eb96e8315d2a3e71b53402433d7f
Aug 5 22:20:30.828802 unknown[765]: fetched base config from "system"
Aug 5 22:20:30.828815 unknown[765]: fetched user config from "qemu"
Aug 5 22:20:30.829334 ignition[765]: fetch-offline: fetch-offline passed
Aug 5 22:20:30.829410 ignition[765]: Ignition finished successfully
Aug 5 22:20:30.831686 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:20:30.833722 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 22:20:30.845466 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 22:20:30.859668 ignition[775]: Ignition 2.18.0
Aug 5 22:20:30.859678 ignition[775]: Stage: kargs
Aug 5 22:20:30.859812 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:20:30.859822 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:20:30.860559 ignition[775]: kargs: kargs passed
Aug 5 22:20:30.860602 ignition[775]: Ignition finished successfully
Aug 5 22:20:30.866958 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 22:20:30.876616 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 22:20:30.888022 ignition[785]: Ignition 2.18.0
Aug 5 22:20:30.888032 ignition[785]: Stage: disks
Aug 5 22:20:30.888173 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:20:30.888183 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:20:30.888974 ignition[785]: disks: disks passed
Aug 5 22:20:30.889018 ignition[785]: Ignition finished successfully
Aug 5 22:20:30.895013 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 22:20:30.895719 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 22:20:30.897510 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:20:30.899802 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:20:30.902563 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:20:30.905991 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:20:30.916588 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 22:20:30.937827 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 22:20:31.335299 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 22:20:31.348507 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 22:20:31.460294 kernel: EXT4-fs (vda9): mounted filesystem e865ac73-053b-4efa-9a0f-50dec3f650d9 r/w with ordered data mode. Quota mode: none.
Aug 5 22:20:31.460525 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 22:20:31.463112 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:20:31.477362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:20:31.480348 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 22:20:31.483412 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 22:20:31.483471 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 22:20:31.485365 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:20:31.487556 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804)
Aug 5 22:20:31.490253 kernel: BTRFS info (device vda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:20:31.490295 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:20:31.490310 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:20:31.494296 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:20:31.496602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:20:31.498560 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 22:20:31.502085 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 22:20:31.539627 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 22:20:31.544217 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Aug 5 22:20:31.548547 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 22:20:31.553910 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 22:20:31.639060 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 22:20:31.653402 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 22:20:31.656352 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 22:20:31.663696 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 22:20:31.664902 kernel: BTRFS info (device vda6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:20:31.683958 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 22:20:31.754474 ignition[921]: INFO : Ignition 2.18.0
Aug 5 22:20:31.754474 ignition[921]: INFO : Stage: mount
Aug 5 22:20:31.768895 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:20:31.768895 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:20:31.768895 ignition[921]: INFO : mount: mount passed
Aug 5 22:20:31.768895 ignition[921]: INFO : Ignition finished successfully
Aug 5 22:20:31.775881 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 22:20:31.782546 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 22:20:31.789647 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:20:31.803302 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (931)
Aug 5 22:20:31.807615 kernel: BTRFS info (device vda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:20:31.807648 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:20:31.807662 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:20:31.811329 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:20:31.814416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:20:31.838793 ignition[949]: INFO : Ignition 2.18.0
Aug 5 22:20:31.838793 ignition[949]: INFO : Stage: files
Aug 5 22:20:31.860486 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:20:31.860486 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:20:31.860486 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 22:20:31.864389 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 22:20:31.864389 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 22:20:31.869229 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 22:20:31.870844 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 22:20:31.872539 unknown[949]: wrote ssh authorized keys file for user: core
Aug 5 22:20:31.873704 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 22:20:31.875173 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:20:31.877153 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 5 22:20:31.899541 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 22:20:31.952926 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:20:31.955588 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Aug 5 22:20:32.204473 systemd-networkd[760]: eth0: Gained IPv6LL
Aug 5 22:20:32.330499 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 5 22:20:32.696332 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:20:32.696332 ignition[949]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 5 22:20:32.700593 ignition[949]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:20:32.700593 ignition[949]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:20:32.700593 ignition[949]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 5 22:20:32.700593 ignition[949]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 5 22:20:32.700593 ignition[949]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:20:32.700593 ignition[949]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:20:32.700593 ignition[949]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 5 22:20:32.700593 ignition[949]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:20:32.720057 ignition[949]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:20:32.725751 ignition[949]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:20:32.727531 ignition[949]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:20:32.727531 ignition[949]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 22:20:32.727531 ignition[949]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 22:20:32.727531 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:20:32.727531 ignition[949]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:20:32.727531 ignition[949]: INFO : files: files passed
Aug 5 22:20:32.727531 ignition[949]: INFO : Ignition finished successfully
Aug 5 22:20:32.729203 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 22:20:32.737567 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 22:20:32.740397 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 22:20:32.742177 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 22:20:32.742321 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 22:20:32.751668 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 5 22:20:32.753360 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:20:32.753360 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:20:32.758810 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:20:32.756341 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:20:32.759615 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 22:20:32.773565 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 22:20:32.807125 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 22:20:32.807307 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 22:20:32.809953 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 22:20:32.812348 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 22:20:32.814447 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 22:20:32.815513 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 22:20:32.840698 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:20:32.854543 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 22:20:32.867509 systemd[1]: Stopped target network.target - Network.
Aug 5 22:20:32.869532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:20:32.870103 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:20:32.870650 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 22:20:32.870965 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 22:20:32.871125 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:20:32.879624 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 22:20:32.880349 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 22:20:32.881084 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 22:20:32.881705 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:20:32.882090 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 22:20:32.883648 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 22:20:32.884023 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:20:32.884404 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 22:20:32.884872 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 22:20:32.885192 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 22:20:32.885662 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 22:20:32.885837 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:20:32.902095 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:20:32.902819 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:20:32.903120 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 22:20:32.903266 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:20:32.909839 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 22:20:32.909986 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:20:32.910953 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 22:20:32.911057 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:20:32.914709 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 22:20:32.916693 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 22:20:32.921368 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:20:32.922786 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 22:20:32.925244 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 22:20:32.927103 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 22:20:32.927244 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:20:32.929024 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 22:20:32.929144 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:20:32.930882 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 22:20:32.931002 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:20:32.932999 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 22:20:32.933131 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 22:20:32.951487 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 22:20:32.953202 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 22:20:32.954536 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 22:20:32.956985 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 22:20:32.958151 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 22:20:32.958467 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:20:32.960285 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 22:20:32.960475 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:20:32.963379 systemd-networkd[760]: eth0: DHCPv6 lease lost
Aug 5 22:20:32.968258 ignition[1003]: INFO : Ignition 2.18.0
Aug 5 22:20:32.968258 ignition[1003]: INFO : Stage: umount
Aug 5 22:20:32.968258 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:20:32.968258 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:20:32.968258 ignition[1003]: INFO : umount: umount passed
Aug 5 22:20:32.968258 ignition[1003]: INFO : Ignition finished successfully
Aug 5 22:20:32.968403 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 22:20:32.969048 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 22:20:32.971086 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 22:20:32.971250 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 22:20:32.974721 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 22:20:32.974920 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:20:33.005619 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 22:20:33.005744 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 22:20:33.010771 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 22:20:33.011718 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 22:20:33.011787 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:20:33.013143 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 22:20:33.013205 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 22:20:33.015347 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 22:20:33.015398 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 22:20:33.017370 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 22:20:33.017420 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 22:20:33.019770 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 22:20:33.019827 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 22:20:33.038503 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 22:20:33.040714 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 22:20:33.040804 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:20:33.043199 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 22:20:33.043264 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:20:33.045931 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 22:20:33.045980 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:20:33.048067 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 22:20:33.048116 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:20:33.050582 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:20:33.069309 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:20:33.070495 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:20:33.073679 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:20:33.074802 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:20:33.077572 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:20:33.078742 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:20:33.081093 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:20:33.081158 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:20:33.083378 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:20:33.083445 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:20:33.087800 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:20:33.087869 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:20:33.091251 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:20:33.091329 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:20:33.108457 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:20:33.110905 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:20:33.110977 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:20:33.113449 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 22:20:33.114726 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:20:33.118487 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:20:33.119497 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:20:33.122004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:20:33.123030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:20:33.125634 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:20:33.126826 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:20:33.502130 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:20:33.503264 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:20:33.505801 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:20:33.507964 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:20:33.508979 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:20:33.523576 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:20:33.535048 systemd[1]: Switching root.
Aug 5 22:20:33.570566 systemd-journald[193]: Journal stopped
Aug 5 22:20:35.238950 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:20:35.239010 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:20:35.239024 kernel: SELinux: policy capability open_perms=1
Aug 5 22:20:35.239036 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:20:35.239052 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:20:35.239063 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:20:35.239075 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:20:35.239090 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:20:35.239100 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:20:35.239112 kernel: audit: type=1403 audit(1722896434.453:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:20:35.239124 systemd[1]: Successfully loaded SELinux policy in 40.822ms.
Aug 5 22:20:35.239159 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.014ms.
Aug 5 22:20:35.239172 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:20:35.239185 systemd[1]: Detected virtualization kvm.
Aug 5 22:20:35.239197 systemd[1]: Detected architecture x86-64.
Aug 5 22:20:35.239218 systemd[1]: Detected first boot.
Aug 5 22:20:35.239234 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:20:35.239253 zram_generator::config[1047]: No configuration found.
Aug 5 22:20:35.239267 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:20:35.239290 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 22:20:35.239302 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 22:20:35.239323 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:20:35.239336 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:20:35.239348 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:20:35.239362 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:20:35.239374 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:20:35.239386 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:20:35.239398 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:20:35.239410 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:20:35.239427 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:20:35.239439 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:20:35.239451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:20:35.239466 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:20:35.239478 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:20:35.239490 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:20:35.239503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:20:35.239518 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 5 22:20:35.239534 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:20:35.239551 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 22:20:35.239567 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 22:20:35.239579 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:20:35.239594 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:20:35.239606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:20:35.239618 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:20:35.239630 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:20:35.239642 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:20:35.239654 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:20:35.239666 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:20:35.239678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:20:35.239693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:20:35.239706 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:20:35.239720 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:20:35.239732 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:20:35.239744 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:20:35.239755 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:20:35.239778 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:20:35.239800 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:20:35.239819 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:20:35.239837 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:20:35.239850 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:20:35.239862 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:20:35.239874 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:20:35.239886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:20:35.239898 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:20:35.239910 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:20:35.239921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:20:35.239935 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:20:35.239947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:20:35.239959 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:20:35.239971 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:20:35.239984 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:20:35.239996 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 22:20:35.240008 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 22:20:35.240020 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 22:20:35.240031 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 22:20:35.240046 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:20:35.240057 kernel: loop: module loaded
Aug 5 22:20:35.240088 systemd-journald[1109]: Collecting audit messages is disabled.
Aug 5 22:20:35.240118 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:20:35.240135 kernel: fuse: init (API version 7.39)
Aug 5 22:20:35.240159 systemd-journald[1109]: Journal started
Aug 5 22:20:35.240184 systemd-journald[1109]: Runtime Journal (/run/log/journal/d7f7ed361721480aa563fa54ed26145e) is 6.0M, max 48.3M, 42.3M free.
Aug 5 22:20:34.992502 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:20:35.011234 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 5 22:20:35.011733 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 22:20:35.242303 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:20:35.246310 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:20:35.250996 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:20:35.254290 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 22:20:35.255407 systemd[1]: Stopped verity-setup.service.
Aug 5 22:20:35.258289 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:20:35.262196 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:20:35.262554 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:20:35.263793 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:20:35.265078 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:20:35.266250 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:20:35.267532 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:20:35.268881 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:20:35.270250 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:20:35.271869 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:20:35.272062 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:20:35.273647 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:20:35.273849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:20:35.275375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:20:35.275557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:20:35.285356 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:20:35.285539 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:20:35.287301 kernel: ACPI: bus type drm_connector registered
Aug 5 22:20:35.287732 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:20:35.287906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:20:35.289556 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:20:35.289740 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:20:35.291523 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:20:35.293085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:20:35.294817 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:20:35.308295 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:20:35.317378 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:20:35.319805 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:20:35.321049 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:20:35.321079 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:20:35.323094 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:20:35.325476 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:20:35.329345 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:20:35.330759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:20:35.333665 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:20:35.335955 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:20:35.337250 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:20:35.338394 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:20:35.338886 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:20:35.343620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:20:35.350259 systemd-journald[1109]: Time spent on flushing to /var/log/journal/d7f7ed361721480aa563fa54ed26145e is 27.638ms for 987 entries.
Aug 5 22:20:35.350259 systemd-journald[1109]: System Journal (/var/log/journal/d7f7ed361721480aa563fa54ed26145e) is 8.0M, max 195.6M, 187.6M free.
Aug 5 22:20:35.384632 systemd-journald[1109]: Received client request to flush runtime journal.
Aug 5 22:20:35.384666 kernel: loop0: detected capacity change from 0 to 80568
Aug 5 22:20:35.384680 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:20:35.349597 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:20:35.359028 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:20:35.363387 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:20:35.365087 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:20:35.367223 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:20:35.368869 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:20:35.371679 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:20:35.378760 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:20:35.384011 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:20:35.394873 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:20:35.401422 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:20:35.403690 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:20:35.406888 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:20:35.414376 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:20:35.415252 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Aug 5 22:20:35.415284 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Aug 5 22:20:35.423535 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:20:35.429102 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:20:35.432317 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:20:35.433019 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:20:35.442196 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 5 22:20:35.449299 kernel: loop1: detected capacity change from 0 to 139904
Aug 5 22:20:35.471739 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:20:35.481434 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:20:35.486123 kernel: loop2: detected capacity change from 0 to 211296
Aug 5 22:20:35.500606 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Aug 5 22:20:35.500627 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Aug 5 22:20:35.505881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:20:35.523319 kernel: loop3: detected capacity change from 0 to 80568
Aug 5 22:20:35.532314 kernel: loop4: detected capacity change from 0 to 139904
Aug 5 22:20:35.545658 kernel: loop5: detected capacity change from 0 to 211296
Aug 5 22:20:35.552613 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 5 22:20:35.553208 (sd-merge)[1188]: Merged extensions into '/usr'.
Aug 5 22:20:35.557787 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:20:35.557802 systemd[1]: Reloading...
Aug 5 22:20:35.611417 zram_generator::config[1211]: No configuration found.
Aug 5 22:20:35.693906 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:20:35.774207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:20:35.846664 systemd[1]: Reloading finished in 288 ms.
Aug 5 22:20:35.879459 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:20:35.881191 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:20:35.896596 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:20:35.899437 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:20:35.906921 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:20:35.906941 systemd[1]: Reloading...
Aug 5 22:20:35.952263 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:20:35.952831 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:20:35.954070 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:20:35.954739 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Aug 5 22:20:35.954875 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Aug 5 22:20:35.959941 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:20:35.959954 systemd-tmpfiles[1252]: Skipping /boot
Aug 5 22:20:35.964297 zram_generator::config[1277]: No configuration found.
Aug 5 22:20:35.974562 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:20:35.974690 systemd-tmpfiles[1252]: Skipping /boot
Aug 5 22:20:36.076571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:20:36.126595 systemd[1]: Reloading finished in 219 ms.
Aug 5 22:20:36.145066 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:20:36.156852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:20:36.164432 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:20:36.167089 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:20:36.170442 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:20:36.175054 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:20:36.178497 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:20:36.181427 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:20:36.190355 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:20:36.192977 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:20:36.193156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:20:36.197496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:20:36.200231 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:20:36.207175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:20:36.208453 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:20:36.208561 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:20:36.214705 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 22:20:36.217095 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Aug 5 22:20:36.218646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:20:36.218834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:20:36.220851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:20:36.221045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:20:36.223062 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:20:36.223436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:20:36.232625 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 22:20:36.237236 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:20:36.237686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:20:36.246726 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:20:36.249410 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:20:36.252509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Aug 5 22:20:36.258095 augenrules[1351]: No rules Aug 5 22:20:36.258529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:20:36.260407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:20:36.262448 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 22:20:36.266424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:20:36.267444 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 22:20:36.269037 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:20:36.271206 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:20:36.272868 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:20:36.273050 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:20:36.280599 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:20:36.280808 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:20:36.287820 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 22:20:36.290352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:20:36.290552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:20:36.295736 systemd[1]: Finished ensure-sysext.service. Aug 5 22:20:36.297255 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:20:36.297502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:20:36.306173 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 22:20:36.321041 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Aug 5 22:20:36.332541 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:20:36.333726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:20:36.333802 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:20:36.340305 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1370) Aug 5 22:20:36.344536 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 5 22:20:36.345893 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 22:20:36.348315 systemd-resolved[1320]: Positive Trust Anchors: Aug 5 22:20:36.348323 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:20:36.348355 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:20:36.354580 systemd-resolved[1320]: Defaulting to hostname 'linux'. Aug 5 22:20:36.356934 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:20:36.359441 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 5 22:20:36.368320 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1366) Aug 5 22:20:36.398400 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 5 22:20:36.406239 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 5 22:20:36.415567 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 22:20:36.418846 kernel: ACPI: button: Power Button [PWRF] Aug 5 22:20:36.418885 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Aug 5 22:20:36.426993 systemd-networkd[1392]: lo: Link UP Aug 5 22:20:36.427483 systemd-networkd[1392]: lo: Gained carrier Aug 5 22:20:36.430234 systemd-networkd[1392]: Enumeration completed Aug 5 22:20:36.431320 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 5 22:20:36.431421 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 5 22:20:36.431744 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:20:36.431749 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:20:36.432680 systemd-networkd[1392]: eth0: Link UP Aug 5 22:20:36.432684 systemd-networkd[1392]: eth0: Gained carrier Aug 5 22:20:36.432696 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:20:36.432985 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:20:36.434494 systemd[1]: Reached target network.target - Network. Aug 5 22:20:36.435486 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 22:20:36.446520 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Aug 5 22:20:36.448124 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 22:20:36.451376 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.155/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 5 22:20:36.453760 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Aug 5 22:20:36.454910 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 5 22:20:36.455026 systemd-timesyncd[1394]: Initial clock synchronization to Mon 2024-08-05 22:20:36.686824 UTC. Aug 5 22:20:36.535306 kernel: mousedev: PS/2 mouse device common for all mice Aug 5 22:20:36.539599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:20:36.544420 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:20:36.544833 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:20:36.553496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:20:36.563517 kernel: kvm_amd: TSC scaling supported Aug 5 22:20:36.563564 kernel: kvm_amd: Nested Virtualization enabled Aug 5 22:20:36.563582 kernel: kvm_amd: Nested Paging enabled Aug 5 22:20:36.563622 kernel: kvm_amd: LBR virtualization supported Aug 5 22:20:36.564726 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 5 22:20:36.564749 kernel: kvm_amd: Virtual GIF supported Aug 5 22:20:36.585301 kernel: EDAC MC: Ver: 3.0.0 Aug 5 22:20:36.616539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:20:36.620256 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 22:20:36.632526 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 22:20:36.640362 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Aug 5 22:20:36.674442 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 22:20:36.676184 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:20:36.677456 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:20:36.678748 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 22:20:36.680152 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 22:20:36.681729 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 22:20:36.683119 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 22:20:36.684928 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 22:20:36.688545 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 22:20:36.688595 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:20:36.689925 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:20:36.693083 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 22:20:36.697588 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 22:20:36.716432 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 22:20:36.719681 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 22:20:36.721738 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 22:20:36.723296 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:20:36.724649 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:20:36.725967 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Aug 5 22:20:36.726002 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:20:36.727171 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 22:20:36.728935 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:20:36.729926 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 22:20:36.734953 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 22:20:36.738425 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 22:20:36.740952 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 22:20:36.743219 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 22:20:36.746285 jq[1425]: false Aug 5 22:20:36.746745 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 22:20:36.752483 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 22:20:36.756590 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 22:20:36.762737 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 22:20:36.765657 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 22:20:36.766306 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 22:20:36.767730 dbus-daemon[1424]: [system] SELinux support is enabled Aug 5 22:20:36.769475 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 22:20:36.772725 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Aug 5 22:20:36.775204 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 22:20:36.781021 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 22:20:36.781610 extend-filesystems[1426]: Found loop3 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found loop4 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found loop5 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found sr0 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found vda Aug 5 22:20:36.784067 extend-filesystems[1426]: Found vda1 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found vda2 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found vda3 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found usr Aug 5 22:20:36.784067 extend-filesystems[1426]: Found vda4 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found vda6 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found vda7 Aug 5 22:20:36.784067 extend-filesystems[1426]: Found vda9 Aug 5 22:20:36.784067 extend-filesystems[1426]: Checking size of /dev/vda9 Aug 5 22:20:36.787476 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 22:20:36.806675 jq[1440]: true Aug 5 22:20:36.787767 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 22:20:36.811617 extend-filesystems[1426]: Resized partition /dev/vda9 Aug 5 22:20:36.788149 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 22:20:36.788478 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 22:20:36.794817 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 22:20:36.795087 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 5 22:20:36.817659 update_engine[1438]: I0805 22:20:36.817577 1438 main.cc:92] Flatcar Update Engine starting Aug 5 22:20:36.819750 extend-filesystems[1454]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 22:20:36.824219 update_engine[1438]: I0805 22:20:36.820580 1438 update_check_scheduler.cc:74] Next update check in 11m25s Aug 5 22:20:36.824260 jq[1447]: true Aug 5 22:20:36.827671 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 5 22:20:36.831298 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1370) Aug 5 22:20:36.834830 (ntainerd)[1451]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 22:20:36.842548 systemd[1]: Started update-engine.service - Update Engine. Aug 5 22:20:36.843173 tar[1446]: linux-amd64/helm Aug 5 22:20:36.850451 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 22:20:36.850485 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 22:20:36.852429 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 22:20:36.852449 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 22:20:36.854519 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 5 22:20:36.862474 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 5 22:20:36.877026 systemd-logind[1436]: Watching system buttons on /dev/input/event1 (Power Button) Aug 5 22:20:36.878696 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 5 22:20:36.878696 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 5 22:20:36.878696 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 5 22:20:36.877054 systemd-logind[1436]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 5 22:20:36.903827 extend-filesystems[1426]: Resized filesystem in /dev/vda9 Aug 5 22:20:36.879750 systemd-logind[1436]: New seat seat0. Aug 5 22:20:36.879770 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 22:20:36.880384 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 22:20:36.887869 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 22:20:36.912772 bash[1479]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:20:36.915065 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 22:20:36.917715 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 22:20:36.918151 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 5 22:20:37.066882 containerd[1451]: time="2024-08-05T22:20:37.066740940Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Aug 5 22:20:37.089064 containerd[1451]: time="2024-08-05T22:20:37.089011143Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 22:20:37.089064 containerd[1451]: time="2024-08-05T22:20:37.089047267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Aug 5 22:20:37.090718 containerd[1451]: time="2024-08-05T22:20:37.090632399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:20:37.090718 containerd[1451]: time="2024-08-05T22:20:37.090667595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:20:37.090919 containerd[1451]: time="2024-08-05T22:20:37.090894287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:20:37.090919 containerd[1451]: time="2024-08-05T22:20:37.090913256Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 22:20:37.091022 containerd[1451]: time="2024-08-05T22:20:37.091007504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 22:20:37.091084 containerd[1451]: time="2024-08-05T22:20:37.091069041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:20:37.091136 containerd[1451]: time="2024-08-05T22:20:37.091083153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 22:20:37.091180 containerd[1451]: time="2024-08-05T22:20:37.091167175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Aug 5 22:20:37.091444 containerd[1451]: time="2024-08-05T22:20:37.091427847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 22:20:37.091480 containerd[1451]: time="2024-08-05T22:20:37.091447403Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 22:20:37.091480 containerd[1451]: time="2024-08-05T22:20:37.091458692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:20:37.091589 containerd[1451]: time="2024-08-05T22:20:37.091573197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:20:37.091619 containerd[1451]: time="2024-08-05T22:20:37.091588084Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 5 22:20:37.091704 containerd[1451]: time="2024-08-05T22:20:37.091647580Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 22:20:37.091704 containerd[1451]: time="2024-08-05T22:20:37.091661765Z" level=info msg="metadata content store policy set" policy=shared Aug 5 22:20:37.225541 containerd[1451]: time="2024-08-05T22:20:37.225462885Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 22:20:37.225541 containerd[1451]: time="2024-08-05T22:20:37.225535773Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 22:20:37.225541 containerd[1451]: time="2024-08-05T22:20:37.225555773Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Aug 5 22:20:37.225737 containerd[1451]: time="2024-08-05T22:20:37.225600227Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 22:20:37.225737 containerd[1451]: time="2024-08-05T22:20:37.225621103Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 22:20:37.225737 containerd[1451]: time="2024-08-05T22:20:37.225637598Z" level=info msg="NRI interface is disabled by configuration." Aug 5 22:20:37.225737 containerd[1451]: time="2024-08-05T22:20:37.225652752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 22:20:37.225940 containerd[1451]: time="2024-08-05T22:20:37.225900641Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 22:20:37.225940 containerd[1451]: time="2024-08-05T22:20:37.225930569Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 22:20:37.225986 containerd[1451]: time="2024-08-05T22:20:37.225947404Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 22:20:37.225986 containerd[1451]: time="2024-08-05T22:20:37.225966136Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 22:20:37.226029 containerd[1451]: time="2024-08-05T22:20:37.225983559Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 22:20:37.226029 containerd[1451]: time="2024-08-05T22:20:37.226010940Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 22:20:37.226065 containerd[1451]: time="2024-08-05T22:20:37.226028002Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 5 22:20:37.226065 containerd[1451]: time="2024-08-05T22:20:37.226044374Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 22:20:37.226065 containerd[1451]: time="2024-08-05T22:20:37.226061910Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 22:20:37.226128 containerd[1451]: time="2024-08-05T22:20:37.226078497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 5 22:20:37.226128 containerd[1451]: time="2024-08-05T22:20:37.226094714Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 22:20:37.226128 containerd[1451]: time="2024-08-05T22:20:37.226109198Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 22:20:37.226267 containerd[1451]: time="2024-08-05T22:20:37.226240272Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 22:20:37.226822 containerd[1451]: time="2024-08-05T22:20:37.226770285Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:20:37.226822 containerd[1451]: time="2024-08-05T22:20:37.226831078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.226849079Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.226879944Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.226955151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.226969667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.226996657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.227009904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.227023513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.227037090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.227049771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.227069534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227086 containerd[1451]: time="2024-08-05T22:20:37.227083111Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:20:37.227443 containerd[1451]: time="2024-08-05T22:20:37.227278936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227443 containerd[1451]: time="2024-08-05T22:20:37.227314504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Aug 5 22:20:37.227443 containerd[1451]: time="2024-08-05T22:20:37.227329092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227443 containerd[1451]: time="2024-08-05T22:20:37.227342060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227443 containerd[1451]: time="2024-08-05T22:20:37.227356834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227443 containerd[1451]: time="2024-08-05T22:20:37.227372679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227443 containerd[1451]: time="2024-08-05T22:20:37.227385453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:20:37.227443 containerd[1451]: time="2024-08-05T22:20:37.227397422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1
Aug 5 22:20:37.227825 containerd[1451]: time="2024-08-05T22:20:37.227664805Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 22:20:37.227825 containerd[1451]: time="2024-08-05T22:20:37.227722476Z" level=info msg="Connect containerd service"
Aug 5 22:20:37.227825 containerd[1451]: time="2024-08-05T22:20:37.227751702Z" level=info msg="using legacy CRI server"
Aug 5 22:20:37.227825 containerd[1451]: time="2024-08-05T22:20:37.227759342Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 22:20:37.228385 containerd[1451]: time="2024-08-05T22:20:37.227854517Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 22:20:37.228514 containerd[1451]: time="2024-08-05T22:20:37.228468077Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 22:20:37.228514 containerd[1451]: time="2024-08-05T22:20:37.228506345Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 22:20:37.228586 containerd[1451]: time="2024-08-05T22:20:37.228532819Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 22:20:37.228586 containerd[1451]: time="2024-08-05T22:20:37.228546675Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 22:20:37.228586 containerd[1451]: time="2024-08-05T22:20:37.228558810Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 22:20:37.229222 containerd[1451]: time="2024-08-05T22:20:37.228798429Z" level=info msg="Start subscribing containerd event"
Aug 5 22:20:37.229222 containerd[1451]: time="2024-08-05T22:20:37.228862358Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 22:20:37.229222 containerd[1451]: time="2024-08-05T22:20:37.228865904Z" level=info msg="Start recovering state"
Aug 5 22:20:37.229222 containerd[1451]: time="2024-08-05T22:20:37.228917894Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 22:20:37.229222 containerd[1451]: time="2024-08-05T22:20:37.228963028Z" level=info msg="Start event monitor"
Aug 5 22:20:37.229222 containerd[1451]: time="2024-08-05T22:20:37.228984977Z" level=info msg="Start snapshots syncer"
Aug 5 22:20:37.229222 containerd[1451]: time="2024-08-05T22:20:37.228997255Z" level=info msg="Start cni network conf syncer for default"
Aug 5 22:20:37.229222 containerd[1451]: time="2024-08-05T22:20:37.229006523Z" level=info msg="Start streaming server"
Aug 5 22:20:37.229222 containerd[1451]: time="2024-08-05T22:20:37.229095833Z" level=info msg="containerd successfully booted in 0.163545s"
Aug 5 22:20:37.229443 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 22:20:37.303951 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 22:20:37.321451 tar[1446]: linux-amd64/LICENSE
Aug 5 22:20:37.321561 tar[1446]: linux-amd64/README.md
Aug 5 22:20:37.332120 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 22:20:37.336203 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 22:20:37.337948 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 5 22:20:37.348558 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 22:20:37.348817 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 22:20:37.351894 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 22:20:37.367156 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 22:20:37.370502 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 22:20:37.373013 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 5 22:20:37.374530 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 22:20:37.965203 systemd-networkd[1392]: eth0: Gained IPv6LL
Aug 5 22:20:37.969243 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 22:20:37.971340 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 22:20:37.981625 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 5 22:20:37.984630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:20:37.987292 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 22:20:38.014718 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 22:20:38.022023 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 5 22:20:38.022366 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 5 22:20:38.024494 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:20:39.299759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:20:39.301813 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 22:20:39.303431 systemd[1]: Startup finished in 861ms (kernel) + 6.715s (initrd) + 4.888s (userspace) = 12.466s.
Aug 5 22:20:39.306067 (kubelet)[1536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:20:40.041699 kubelet[1536]: E0805 22:20:40.041578 1536 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:20:40.047227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:20:40.047497 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:20:40.047887 systemd[1]: kubelet.service: Consumed 1.843s CPU time.
Aug 5 22:20:41.482421 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 22:20:41.484005 systemd[1]: Started sshd@0-10.0.0.155:22-10.0.0.1:35016.service - OpenSSH per-connection server daemon (10.0.0.1:35016).
Aug 5 22:20:41.533076 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 35016 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M
Aug 5 22:20:41.535227 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:41.544787 systemd-logind[1436]: New session 1 of user core.
Aug 5 22:20:41.546154 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 22:20:41.555612 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 22:20:41.567153 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 22:20:41.576589 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 22:20:41.579716 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:41.700812 systemd[1555]: Queued start job for default target default.target.
Aug 5 22:20:41.710895 systemd[1555]: Created slice app.slice - User Application Slice.
Aug 5 22:20:41.710925 systemd[1555]: Reached target paths.target - Paths.
Aug 5 22:20:41.710941 systemd[1555]: Reached target timers.target - Timers.
Aug 5 22:20:41.712704 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 22:20:41.725624 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 22:20:41.725769 systemd[1555]: Reached target sockets.target - Sockets.
Aug 5 22:20:41.725788 systemd[1555]: Reached target basic.target - Basic System.
Aug 5 22:20:41.725845 systemd[1555]: Reached target default.target - Main User Target.
Aug 5 22:20:41.725882 systemd[1555]: Startup finished in 139ms.
Aug 5 22:20:41.726261 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 22:20:41.728019 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 22:20:41.791897 systemd[1]: Started sshd@1-10.0.0.155:22-10.0.0.1:35026.service - OpenSSH per-connection server daemon (10.0.0.1:35026).
Aug 5 22:20:41.851480 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 35026 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M
Aug 5 22:20:41.853306 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:41.858135 systemd-logind[1436]: New session 2 of user core.
Aug 5 22:20:41.868443 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 22:20:41.924569 sshd[1566]: pam_unix(sshd:session): session closed for user core
Aug 5 22:20:41.938513 systemd[1]: sshd@1-10.0.0.155:22-10.0.0.1:35026.service: Deactivated successfully.
Aug 5 22:20:41.940253 systemd[1]: session-2.scope: Deactivated successfully.
Aug 5 22:20:41.941797 systemd-logind[1436]: Session 2 logged out. Waiting for processes to exit.
Aug 5 22:20:41.948756 systemd[1]: Started sshd@2-10.0.0.155:22-10.0.0.1:35040.service - OpenSSH per-connection server daemon (10.0.0.1:35040).
Aug 5 22:20:41.949787 systemd-logind[1436]: Removed session 2.
Aug 5 22:20:41.980890 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 35040 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M
Aug 5 22:20:41.982770 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:41.987341 systemd-logind[1436]: New session 3 of user core.
Aug 5 22:20:42.002435 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 22:20:42.053544 sshd[1573]: pam_unix(sshd:session): session closed for user core
Aug 5 22:20:42.064335 systemd[1]: sshd@2-10.0.0.155:22-10.0.0.1:35040.service: Deactivated successfully.
Aug 5 22:20:42.066134 systemd[1]: session-3.scope: Deactivated successfully.
Aug 5 22:20:42.067630 systemd-logind[1436]: Session 3 logged out. Waiting for processes to exit.
Aug 5 22:20:42.080724 systemd[1]: Started sshd@3-10.0.0.155:22-10.0.0.1:35054.service - OpenSSH per-connection server daemon (10.0.0.1:35054).
Aug 5 22:20:42.081845 systemd-logind[1436]: Removed session 3.
Aug 5 22:20:42.112653 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 35054 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M
Aug 5 22:20:42.114364 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:42.118410 systemd-logind[1436]: New session 4 of user core.
Aug 5 22:20:42.131487 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 22:20:42.187196 sshd[1581]: pam_unix(sshd:session): session closed for user core
Aug 5 22:20:42.198997 systemd[1]: sshd@3-10.0.0.155:22-10.0.0.1:35054.service: Deactivated successfully.
Aug 5 22:20:42.201427 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 22:20:42.203643 systemd-logind[1436]: Session 4 logged out. Waiting for processes to exit.
Aug 5 22:20:42.205262 systemd[1]: Started sshd@4-10.0.0.155:22-10.0.0.1:35062.service - OpenSSH per-connection server daemon (10.0.0.1:35062).
Aug 5 22:20:42.206402 systemd-logind[1436]: Removed session 4.
Aug 5 22:20:42.243599 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 35062 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M
Aug 5 22:20:42.245379 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:42.249582 systemd-logind[1436]: New session 5 of user core.
Aug 5 22:20:42.257429 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 22:20:42.318085 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 22:20:42.318516 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:20:42.335178 sudo[1591]: pam_unix(sudo:session): session closed for user root
Aug 5 22:20:42.337617 sshd[1588]: pam_unix(sshd:session): session closed for user core
Aug 5 22:20:42.351523 systemd[1]: sshd@4-10.0.0.155:22-10.0.0.1:35062.service: Deactivated successfully.
Aug 5 22:20:42.353437 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 22:20:42.355268 systemd-logind[1436]: Session 5 logged out. Waiting for processes to exit.
Aug 5 22:20:42.356708 systemd[1]: Started sshd@5-10.0.0.155:22-10.0.0.1:35074.service - OpenSSH per-connection server daemon (10.0.0.1:35074).
Aug 5 22:20:42.357497 systemd-logind[1436]: Removed session 5.
Aug 5 22:20:42.395354 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 35074 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M
Aug 5 22:20:42.397187 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:42.401482 systemd-logind[1436]: New session 6 of user core.
Aug 5 22:20:42.411462 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 22:20:42.467318 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 22:20:42.467639 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:20:42.471804 sudo[1600]: pam_unix(sudo:session): session closed for user root
Aug 5 22:20:42.479837 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 22:20:42.480223 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:20:42.502567 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 22:20:42.504171 auditctl[1603]: No rules
Aug 5 22:20:42.505569 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 22:20:42.505835 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 22:20:42.507665 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:20:42.539727 augenrules[1621]: No rules
Aug 5 22:20:42.541564 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:20:42.542914 sudo[1599]: pam_unix(sudo:session): session closed for user root
Aug 5 22:20:42.544702 sshd[1596]: pam_unix(sshd:session): session closed for user core
Aug 5 22:20:42.561658 systemd[1]: sshd@5-10.0.0.155:22-10.0.0.1:35074.service: Deactivated successfully.
Aug 5 22:20:42.563615 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 22:20:42.565446 systemd-logind[1436]: Session 6 logged out. Waiting for processes to exit.
Aug 5 22:20:42.566820 systemd[1]: Started sshd@6-10.0.0.155:22-10.0.0.1:35080.service - OpenSSH per-connection server daemon (10.0.0.1:35080).
Aug 5 22:20:42.567590 systemd-logind[1436]: Removed session 6.
Aug 5 22:20:42.604186 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 35080 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M
Aug 5 22:20:42.605677 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:42.609805 systemd-logind[1436]: New session 7 of user core.
Aug 5 22:20:42.619640 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 22:20:42.674532 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 22:20:42.674826 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:20:42.776557 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 22:20:42.776725 (dockerd)[1642]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 22:20:43.039115 dockerd[1642]: time="2024-08-05T22:20:43.038956724Z" level=info msg="Starting up"
Aug 5 22:20:44.311728 dockerd[1642]: time="2024-08-05T22:20:44.311674230Z" level=info msg="Loading containers: start."
Aug 5 22:20:44.436317 kernel: Initializing XFRM netlink socket
Aug 5 22:20:44.516624 systemd-networkd[1392]: docker0: Link UP
Aug 5 22:20:44.537873 dockerd[1642]: time="2024-08-05T22:20:44.537827074Z" level=info msg="Loading containers: done."
Aug 5 22:20:44.589194 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1193009821-merged.mount: Deactivated successfully.
Aug 5 22:20:44.590783 dockerd[1642]: time="2024-08-05T22:20:44.590721541Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 22:20:44.590959 dockerd[1642]: time="2024-08-05T22:20:44.590919791Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 22:20:44.591075 dockerd[1642]: time="2024-08-05T22:20:44.591047395Z" level=info msg="Daemon has completed initialization"
Aug 5 22:20:44.624113 dockerd[1642]: time="2024-08-05T22:20:44.624051337Z" level=info msg="API listen on /run/docker.sock"
Aug 5 22:20:44.624257 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 22:20:45.298412 containerd[1451]: time="2024-08-05T22:20:45.298350734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\""
Aug 5 22:20:46.043502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596364178.mount: Deactivated successfully.
Aug 5 22:20:47.149510 containerd[1451]: time="2024-08-05T22:20:47.149441450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:47.150438 containerd[1451]: time="2024-08-05T22:20:47.150402566Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.7: active requests=0, bytes read=35232396"
Aug 5 22:20:47.151499 containerd[1451]: time="2024-08-05T22:20:47.151458899Z" level=info msg="ImageCreate event name:\"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:47.155077 containerd[1451]: time="2024-08-05T22:20:47.155035929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:47.156511 containerd[1451]: time="2024-08-05T22:20:47.156453066Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.7\" with image id \"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\", size \"35229196\" in 1.858057686s"
Aug 5 22:20:47.156560 containerd[1451]: time="2024-08-05T22:20:47.156512981Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\" returns image reference \"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\""
Aug 5 22:20:47.184125 containerd[1451]: time="2024-08-05T22:20:47.184058985Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\""
Aug 5 22:20:49.568439 containerd[1451]: time="2024-08-05T22:20:49.568375952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:49.569496 containerd[1451]: time="2024-08-05T22:20:49.569446363Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.7: active requests=0, bytes read=32204824"
Aug 5 22:20:49.570578 containerd[1451]: time="2024-08-05T22:20:49.570537443Z" level=info msg="ImageCreate event name:\"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:49.573456 containerd[1451]: time="2024-08-05T22:20:49.573403892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:49.574887 containerd[1451]: time="2024-08-05T22:20:49.574831525Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.7\" with image id \"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\", size \"33754770\" in 2.390723567s"
Aug 5 22:20:49.574887 containerd[1451]: time="2024-08-05T22:20:49.574874665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\" returns image reference \"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\""
Aug 5 22:20:49.596351 containerd[1451]: time="2024-08-05T22:20:49.596310144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\""
Aug 5 22:20:50.080790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:20:50.090506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:20:50.238772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:20:50.243365 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:20:50.314671 kubelet[1862]: E0805 22:20:50.314577 1862 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:20:50.322607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:20:50.322828 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:20:51.241390 containerd[1451]: time="2024-08-05T22:20:51.241334451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:51.242180 containerd[1451]: time="2024-08-05T22:20:51.242141924Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.7: active requests=0, bytes read=17320803"
Aug 5 22:20:51.243608 containerd[1451]: time="2024-08-05T22:20:51.243575643Z" level=info msg="ImageCreate event name:\"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:51.246670 containerd[1451]: time="2024-08-05T22:20:51.246636716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:51.247649 containerd[1451]: time="2024-08-05T22:20:51.247614907Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.7\" with image id \"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\", size \"18870767\" in 1.651126249s"
Aug 5 22:20:51.247704 containerd[1451]: time="2024-08-05T22:20:51.247660867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\" returns image reference \"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\""
Aug 5 22:20:51.271930 containerd[1451]: time="2024-08-05T22:20:51.271883940Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\""
Aug 5 22:20:52.321708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599185625.mount: Deactivated successfully.
Aug 5 22:20:53.754119 containerd[1451]: time="2024-08-05T22:20:53.754041858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:53.755164 containerd[1451]: time="2024-08-05T22:20:53.755093773Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.7: active requests=0, bytes read=28600088"
Aug 5 22:20:53.756743 containerd[1451]: time="2024-08-05T22:20:53.756706440Z" level=info msg="ImageCreate event name:\"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:53.758976 containerd[1451]: time="2024-08-05T22:20:53.758940267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:53.759603 containerd[1451]: time="2024-08-05T22:20:53.759547634Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.7\" with image id \"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\", repo tag \"registry.k8s.io/kube-proxy:v1.29.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\", size \"28599107\" in 2.487620119s"
Aug 5 22:20:53.759603 containerd[1451]: time="2024-08-05T22:20:53.759594461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\" returns image reference \"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\""
Aug 5 22:20:53.782405 containerd[1451]: time="2024-08-05T22:20:53.782349559Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug 5 22:20:54.339817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521382558.mount: Deactivated successfully.
Aug 5 22:20:55.255328 containerd[1451]: time="2024-08-05T22:20:55.255254067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:55.255948 containerd[1451]: time="2024-08-05T22:20:55.255838880Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Aug 5 22:20:55.257097 containerd[1451]: time="2024-08-05T22:20:55.257052302Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:55.260574 containerd[1451]: time="2024-08-05T22:20:55.260512972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:55.262502 containerd[1451]: time="2024-08-05T22:20:55.262453304Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.480056133s"
Aug 5 22:20:55.262502 containerd[1451]: time="2024-08-05T22:20:55.262493585Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Aug 5 22:20:55.287649 containerd[1451]: time="2024-08-05T22:20:55.287596655Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:20:56.250080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894383661.mount: Deactivated successfully.
Aug 5 22:20:56.256136 containerd[1451]: time="2024-08-05T22:20:56.256084098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:56.256834 containerd[1451]: time="2024-08-05T22:20:56.256783558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Aug 5 22:20:56.257952 containerd[1451]: time="2024-08-05T22:20:56.257917614Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:56.260550 containerd[1451]: time="2024-08-05T22:20:56.260514385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:56.261378 containerd[1451]: time="2024-08-05T22:20:56.261337087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 973.700285ms"
Aug 5 22:20:56.261422 containerd[1451]: time="2024-08-05T22:20:56.261379483Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Aug 5 22:20:56.285404 containerd[1451]: time="2024-08-05T22:20:56.285355104Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 22:20:56.857151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587238545.mount: Deactivated successfully.
Aug 5 22:21:00.330864 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:21:00.337476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:21:00.519539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:00.526460 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:21:00.663265 kubelet[2009]: E0805 22:21:00.663032 2009 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:21:00.668941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:21:00.669187 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:21:02.103880 containerd[1451]: time="2024-08-05T22:21:02.103800534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:02.104990 containerd[1451]: time="2024-08-05T22:21:02.104941685Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Aug 5 22:21:02.106534 containerd[1451]: time="2024-08-05T22:21:02.106495443Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:02.109635 containerd[1451]: time="2024-08-05T22:21:02.109592489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:02.110909 containerd[1451]: time="2024-08-05T22:21:02.110877106Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.825478816s"
Aug 5 22:21:02.110950 containerd[1451]: time="2024-08-05T22:21:02.110908076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Aug 5 22:21:04.660299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:04.674567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:21:04.692670 systemd[1]: Reloading requested from client PID 2099 ('systemctl') (unit session-7.scope)...
Aug 5 22:21:04.692689 systemd[1]: Reloading...
Aug 5 22:21:04.764304 zram_generator::config[2138]: No configuration found.
Aug 5 22:21:05.003446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:21:05.081453 systemd[1]: Reloading finished in 388 ms.
Aug 5 22:21:05.143666 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 22:21:05.143763 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 22:21:05.144026 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:05.145651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:21:05.300481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:21:05.305225 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:21:05.547888 kubelet[2184]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:21:05.549309 kubelet[2184]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:21:05.549309 kubelet[2184]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:21:05.549309 kubelet[2184]: I0805 22:21:05.548367 2184 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:21:05.923054 kubelet[2184]: I0805 22:21:05.923009 2184 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Aug 5 22:21:05.923054 kubelet[2184]: I0805 22:21:05.923042 2184 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:21:05.923334 kubelet[2184]: I0805 22:21:05.923325 2184 server.go:919] "Client rotation is on, will bootstrap in background"
Aug 5 22:21:05.949626 kubelet[2184]: E0805 22:21:05.949596 2184 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:05.950476 kubelet[2184]: I0805 22:21:05.950461 2184 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:21:05.962731 kubelet[2184]: I0805 22:21:05.962696 2184 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:21:05.963594 kubelet[2184]: I0805 22:21:05.963571 2184 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:21:05.963745 kubelet[2184]: I0805 22:21:05.963723 2184 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:21:05.963829 kubelet[2184]: I0805 22:21:05.963749 2184 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:21:05.963829 kubelet[2184]: I0805 22:21:05.963757 2184 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:21:05.963907 kubelet[2184]: I0805 22:21:05.963884 2184 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:21:05.964007 kubelet[2184]: I0805 22:21:05.963977 2184 kubelet.go:396] "Attempting to sync node with API server"
Aug 5 22:21:05.964007 kubelet[2184]: I0805 22:21:05.964000 2184 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:21:05.964065 kubelet[2184]: I0805 22:21:05.964027 2184 kubelet.go:312] "Adding apiserver pod source"
Aug 5 22:21:05.964065 kubelet[2184]: I0805 22:21:05.964044 2184 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:21:05.964660 kubelet[2184]: W0805 22:21:05.964566 2184 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:05.964660 kubelet[2184]: E0805 22:21:05.964623 2184 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:05.964660 kubelet[2184]: W0805 22:21:05.964610 2184 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:05.964660 kubelet[2184]: E0805 22:21:05.964654 2184 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:05.965298 kubelet[2184]: I0805 22:21:05.965267 2184 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 22:21:05.967729 kubelet[2184]: I0805 22:21:05.967712 2184 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 22:21:05.968570 kubelet[2184]: W0805 22:21:05.968547 2184 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:21:05.969209 kubelet[2184]: I0805 22:21:05.969089 2184 server.go:1256] "Started kubelet"
Aug 5 22:21:05.970821 kubelet[2184]: I0805 22:21:05.970799 2184 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 22:21:05.971253 kubelet[2184]: I0805 22:21:05.971236 2184 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:21:05.971398 kubelet[2184]: I0805 22:21:05.971303 2184 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:21:05.971560 kubelet[2184]: I0805 22:21:05.971545 2184 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:21:05.972523 kubelet[2184]: I0805 22:21:05.972318 2184 server.go:461] "Adding debug handlers to kubelet server"
Aug 5 22:21:05.973837 kubelet[2184]: E0805 22:21:05.973580 2184 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:21:05.973837 kubelet[2184]: I0805 22:21:05.973628 2184 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:21:05.973837 kubelet[2184]: I0805 22:21:05.973714 2184 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:21:05.975851 kubelet[2184]: I0805 22:21:05.974719 2184 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:21:05.975851 kubelet[2184]: W0805 22:21:05.975130 2184 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:05.975851 kubelet[2184]: E0805 22:21:05.975173 2184 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:05.976675 kubelet[2184]: E0805 22:21:05.976658 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="200ms"
Aug 5 22:21:05.977211 kubelet[2184]: I0805 22:21:05.977195 2184 factory.go:221] Registration of the systemd container factory successfully
Aug 5 22:21:05.977545 kubelet[2184]: I0805 22:21:05.977526 2184 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 22:21:05.977800 kubelet[2184]: E0805 22:21:05.977553 2184 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:21:05.978692 kubelet[2184]: I0805 22:21:05.978660 2184 factory.go:221] Registration of the containerd container factory successfully
Aug 5 22:21:05.978919 kubelet[2184]: E0805 22:21:05.978892 2184 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.155:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.155:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f5351edcb334 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:21:05.969066804 +0000 UTC m=+0.658894790,LastTimestamp:2024-08-05 22:21:05.969066804 +0000 UTC m=+0.658894790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 5 22:21:05.988810 kubelet[2184]: I0805 22:21:05.988688 2184 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:21:05.990305 kubelet[2184]: I0805 22:21:05.990049 2184 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:21:05.990305 kubelet[2184]: I0805 22:21:05.990071 2184 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:21:05.990305 kubelet[2184]: I0805 22:21:05.990087 2184 kubelet.go:2329] "Starting kubelet main sync loop"
Aug 5 22:21:05.990305 kubelet[2184]: E0805 22:21:05.990137 2184 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:21:05.994534 kubelet[2184]: W0805 22:21:05.994483 2184 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:05.994534 kubelet[2184]: E0805 22:21:05.994533 2184 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:05.995220 kubelet[2184]: I0805 22:21:05.995195 2184 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:21:05.995220 kubelet[2184]: I0805 22:21:05.995211 2184 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:21:05.995347 kubelet[2184]: I0805 22:21:05.995229 2184 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:21:06.076690 kubelet[2184]: I0805 22:21:06.076659 2184 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:21:06.077072 kubelet[2184]: E0805 22:21:06.077033 2184 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: connection refused" node="localhost"
Aug 5 22:21:06.091228 kubelet[2184]: E0805 22:21:06.091203 2184 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 22:21:06.178244 kubelet[2184]: E0805 22:21:06.178107 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="400ms"
Aug 5 22:21:06.278741 kubelet[2184]: I0805 22:21:06.278690 2184 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:21:06.279059 kubelet[2184]: E0805 22:21:06.279042 2184 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: connection refused" node="localhost"
Aug 5 22:21:06.292190 kubelet[2184]: E0805 22:21:06.292151 2184 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 22:21:06.562932 kubelet[2184]: I0805 22:21:06.562808 2184 policy_none.go:49] "None policy: Start"
Aug 5 22:21:06.563503 kubelet[2184]: I0805 22:21:06.563488 2184 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 5 22:21:06.563551 kubelet[2184]: I0805 22:21:06.563509 2184 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:21:06.576179 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 22:21:06.579520 kubelet[2184]: E0805 22:21:06.579478 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="800ms"
Aug 5 22:21:06.591170 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 22:21:06.594483 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 22:21:06.606445 kubelet[2184]: I0805 22:21:06.606400 2184 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:21:06.606911 kubelet[2184]: I0805 22:21:06.606807 2184 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:21:06.607721 kubelet[2184]: E0805 22:21:06.607691 2184 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 5 22:21:06.680617 kubelet[2184]: I0805 22:21:06.680584 2184 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:21:06.681008 kubelet[2184]: E0805 22:21:06.680978 2184 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: connection refused" node="localhost"
Aug 5 22:21:06.693166 kubelet[2184]: I0805 22:21:06.693119 2184 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 22:21:06.694488 kubelet[2184]: I0805 22:21:06.694470 2184 topology_manager.go:215] "Topology Admit Handler" podUID="4ed83ff7ef9bd4217127b30e42301599" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 22:21:06.695170 kubelet[2184]: I0805 22:21:06.695153 2184 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 22:21:06.700693 systemd[1]: Created slice kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice - libcontainer container kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice.
Aug 5 22:21:06.722021 systemd[1]: Created slice kubepods-burstable-pod4ed83ff7ef9bd4217127b30e42301599.slice - libcontainer container kubepods-burstable-pod4ed83ff7ef9bd4217127b30e42301599.slice.
Aug 5 22:21:06.737927 systemd[1]: Created slice kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice - libcontainer container kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice.
Aug 5 22:21:06.779909 kubelet[2184]: I0805 22:21:06.779857 2184 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ed83ff7ef9bd4217127b30e42301599-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ed83ff7ef9bd4217127b30e42301599\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:21:06.779909 kubelet[2184]: I0805 22:21:06.779909 2184 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ed83ff7ef9bd4217127b30e42301599-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4ed83ff7ef9bd4217127b30e42301599\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:21:06.780073 kubelet[2184]: I0805 22:21:06.779938 2184 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:21:06.780073 kubelet[2184]: I0805 22:21:06.779972 2184 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:21:06.780073 kubelet[2184]: I0805 22:21:06.780019 2184 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:21:06.780073 kubelet[2184]: I0805 22:21:06.780066 2184 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:21:06.780222 kubelet[2184]: I0805 22:21:06.780105 2184 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost"
Aug 5 22:21:06.780222 kubelet[2184]: I0805 22:21:06.780132 2184 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ed83ff7ef9bd4217127b30e42301599-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ed83ff7ef9bd4217127b30e42301599\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:21:06.780222 kubelet[2184]: I0805 22:21:06.780166 2184 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:21:06.952198 kubelet[2184]: W0805 22:21:06.952134 2184 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:06.952198 kubelet[2184]: E0805 22:21:06.952197 2184 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:07.019552 kubelet[2184]: E0805 22:21:07.019507 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:21:07.020236 containerd[1451]: time="2024-08-05T22:21:07.020196732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,}"
Aug 5 22:21:07.036453 kubelet[2184]: E0805 22:21:07.036419 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:21:07.036800 containerd[1451]: time="2024-08-05T22:21:07.036770168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4ed83ff7ef9bd4217127b30e42301599,Namespace:kube-system,Attempt:0,}"
Aug 5 22:21:07.040060 kubelet[2184]: E0805 22:21:07.040021 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:21:07.040310 containerd[1451]: time="2024-08-05T22:21:07.040288507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,}"
Aug 5 22:21:07.160532 kubelet[2184]: W0805 22:21:07.160451 2184 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:07.160532 kubelet[2184]: E0805 22:21:07.160520 2184 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:07.218706 kubelet[2184]: W0805 22:21:07.218535 2184 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:07.218706 kubelet[2184]: E0805 22:21:07.218626 2184 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:07.298695 kubelet[2184]: W0805 22:21:07.298617 2184 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:07.298695 kubelet[2184]: E0805 22:21:07.298695 2184 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:07.380487 kubelet[2184]: E0805 22:21:07.380433 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="1.6s"
Aug 5 22:21:07.482129 kubelet[2184]: I0805 22:21:07.482017 2184 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:21:07.482367 kubelet[2184]: E0805 22:21:07.482350 2184 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: connection refused" node="localhost"
Aug 5 22:21:07.696533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072737853.mount: Deactivated successfully.
Aug 5 22:21:07.702920 containerd[1451]: time="2024-08-05T22:21:07.702875453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:21:07.703965 containerd[1451]: time="2024-08-05T22:21:07.703918297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:21:07.704833 containerd[1451]: time="2024-08-05T22:21:07.704784828Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 5 22:21:07.705705 containerd[1451]: time="2024-08-05T22:21:07.705672000Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:21:07.706703 containerd[1451]: time="2024-08-05T22:21:07.706644595Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:21:07.707444 containerd[1451]: time="2024-08-05T22:21:07.707408620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:21:07.708362 containerd[1451]: time="2024-08-05T22:21:07.708327457Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:21:07.712187 containerd[1451]: time="2024-08-05T22:21:07.712149958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:21:07.713111 containerd[1451]: time="2024-08-05T22:21:07.713079030Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.22897ms"
Aug 5 22:21:07.714191 containerd[1451]: time="2024-08-05T22:21:07.714165298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.823161ms"
Aug 5 22:21:07.715322 containerd[1451]: time="2024-08-05T22:21:07.715284737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 694.941212ms"
Aug 5 22:21:07.945112 containerd[1451]: time="2024-08-05T22:21:07.944998301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:21:07.945112 containerd[1451]: time="2024-08-05T22:21:07.945070193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:21:07.945112 containerd[1451]: time="2024-08-05T22:21:07.945095404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:21:07.945307 containerd[1451]: time="2024-08-05T22:21:07.945113688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:21:07.950721 containerd[1451]: time="2024-08-05T22:21:07.949418296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:21:07.950721 containerd[1451]: time="2024-08-05T22:21:07.949483171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:21:07.950721 containerd[1451]: time="2024-08-05T22:21:07.949515590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:21:07.950721 containerd[1451]: time="2024-08-05T22:21:07.950025375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:21:07.953577 containerd[1451]: time="2024-08-05T22:21:07.953452285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:21:07.954090 containerd[1451]: time="2024-08-05T22:21:07.953539925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:21:07.954090 containerd[1451]: time="2024-08-05T22:21:07.953852616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:21:07.954090 containerd[1451]: time="2024-08-05T22:21:07.953878368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:21:08.003525 systemd[1]: Started cri-containerd-4c1074a2e43b51315dcfc4723a2c7cf6f487d5ded712ef9ca6273cd1d3692ae8.scope - libcontainer container 4c1074a2e43b51315dcfc4723a2c7cf6f487d5ded712ef9ca6273cd1d3692ae8.
Aug 5 22:21:08.017175 systemd[1]: Started cri-containerd-561106fde0c04a6dd21b3a23a008cb23c1d57ce006729660933979eb319b575c.scope - libcontainer container 561106fde0c04a6dd21b3a23a008cb23c1d57ce006729660933979eb319b575c.
Aug 5 22:21:08.020502 systemd[1]: Started cri-containerd-814c5e35f3d7819a0df4627834a65296b79d7ed4d5a12aecc73a1d83186c0107.scope - libcontainer container 814c5e35f3d7819a0df4627834a65296b79d7ed4d5a12aecc73a1d83186c0107.
Aug 5 22:21:08.072455 containerd[1451]: time="2024-08-05T22:21:08.072408827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c1074a2e43b51315dcfc4723a2c7cf6f487d5ded712ef9ca6273cd1d3692ae8\""
Aug 5 22:21:08.074172 kubelet[2184]: E0805 22:21:08.073829 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:21:08.079514 containerd[1451]: time="2024-08-05T22:21:08.079468548Z" level=info msg="CreateContainer within sandbox \"4c1074a2e43b51315dcfc4723a2c7cf6f487d5ded712ef9ca6273cd1d3692ae8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 5 22:21:08.081886 containerd[1451]: time="2024-08-05T22:21:08.081844267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,} returns sandbox id \"561106fde0c04a6dd21b3a23a008cb23c1d57ce006729660933979eb319b575c\""
Aug 5 22:21:08.083234 kubelet[2184]: E0805 22:21:08.083207 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:21:08.087245 containerd[1451]: time="2024-08-05T22:21:08.087203697Z" level=info msg="CreateContainer within sandbox \"561106fde0c04a6dd21b3a23a008cb23c1d57ce006729660933979eb319b575c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 5 22:21:08.089668 containerd[1451]: time="2024-08-05T22:21:08.089561475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4ed83ff7ef9bd4217127b30e42301599,Namespace:kube-system,Attempt:0,} returns sandbox id \"814c5e35f3d7819a0df4627834a65296b79d7ed4d5a12aecc73a1d83186c0107\""
Aug 5 22:21:08.090537 kubelet[2184]: E0805 22:21:08.090505 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:21:08.092861 containerd[1451]: time="2024-08-05T22:21:08.092828806Z" level=info msg="CreateContainer within sandbox \"814c5e35f3d7819a0df4627834a65296b79d7ed4d5a12aecc73a1d83186c0107\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 5 22:21:08.109128 containerd[1451]: time="2024-08-05T22:21:08.109081452Z" level=info msg="CreateContainer within sandbox \"4c1074a2e43b51315dcfc4723a2c7cf6f487d5ded712ef9ca6273cd1d3692ae8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bac012707c29c09af93036de3a55c47e362cc85a5580430f5b7d456341b97642\""
Aug 5 22:21:08.109816 containerd[1451]: time="2024-08-05T22:21:08.109794297Z" level=info msg="StartContainer for \"bac012707c29c09af93036de3a55c47e362cc85a5580430f5b7d456341b97642\""
Aug 5 22:21:08.115908 containerd[1451]: time="2024-08-05T22:21:08.115850193Z" level=info msg="CreateContainer within sandbox \"561106fde0c04a6dd21b3a23a008cb23c1d57ce006729660933979eb319b575c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4836086d5434be873e36f7d0759d589ab3b8b66bc7146b0575f7e8c8c59d2df1\""
Aug 5 22:21:08.116976 containerd[1451]: time="2024-08-05T22:21:08.116544134Z" level=info msg="StartContainer for \"4836086d5434be873e36f7d0759d589ab3b8b66bc7146b0575f7e8c8c59d2df1\""
Aug 5 22:21:08.118306 containerd[1451]: time="2024-08-05T22:21:08.118195872Z" level=info msg="CreateContainer within sandbox \"814c5e35f3d7819a0df4627834a65296b79d7ed4d5a12aecc73a1d83186c0107\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b5c4fcccd0658c119b35447df42e2850e7fe8828c24d398b5e2a8306126eaf77\""
Aug 5 22:21:08.118641 containerd[1451]: time="2024-08-05T22:21:08.118615473Z" level=info msg="StartContainer for \"b5c4fcccd0658c119b35447df42e2850e7fe8828c24d398b5e2a8306126eaf77\""
Aug 5 22:21:08.138693 systemd[1]: Started cri-containerd-bac012707c29c09af93036de3a55c47e362cc85a5580430f5b7d456341b97642.scope - libcontainer container bac012707c29c09af93036de3a55c47e362cc85a5580430f5b7d456341b97642.
Aug 5 22:21:08.143951 systemd[1]: Started cri-containerd-4836086d5434be873e36f7d0759d589ab3b8b66bc7146b0575f7e8c8c59d2df1.scope - libcontainer container 4836086d5434be873e36f7d0759d589ab3b8b66bc7146b0575f7e8c8c59d2df1.
Aug 5 22:21:08.146039 kubelet[2184]: E0805 22:21:08.145967 2184 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.155:6443: connect: connection refused
Aug 5 22:21:08.149831 systemd[1]: Started cri-containerd-b5c4fcccd0658c119b35447df42e2850e7fe8828c24d398b5e2a8306126eaf77.scope - libcontainer container b5c4fcccd0658c119b35447df42e2850e7fe8828c24d398b5e2a8306126eaf77.
Aug 5 22:21:08.230772 containerd[1451]: time="2024-08-05T22:21:08.229474932Z" level=info msg="StartContainer for \"4836086d5434be873e36f7d0759d589ab3b8b66bc7146b0575f7e8c8c59d2df1\" returns successfully" Aug 5 22:21:08.230772 containerd[1451]: time="2024-08-05T22:21:08.229551261Z" level=info msg="StartContainer for \"bac012707c29c09af93036de3a55c47e362cc85a5580430f5b7d456341b97642\" returns successfully" Aug 5 22:21:08.257903 containerd[1451]: time="2024-08-05T22:21:08.257494444Z" level=info msg="StartContainer for \"b5c4fcccd0658c119b35447df42e2850e7fe8828c24d398b5e2a8306126eaf77\" returns successfully" Aug 5 22:21:09.005138 kubelet[2184]: E0805 22:21:09.004876 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:09.005709 kubelet[2184]: E0805 22:21:09.005679 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:09.007644 kubelet[2184]: E0805 22:21:09.007602 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:09.084669 kubelet[2184]: I0805 22:21:09.084631 2184 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:21:09.510305 kubelet[2184]: E0805 22:21:09.509670 2184 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 5 22:21:09.606174 kubelet[2184]: I0805 22:21:09.606128 2184 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 22:21:09.966952 kubelet[2184]: I0805 22:21:09.966881 2184 apiserver.go:52] "Watching apiserver" Aug 5 22:21:09.975191 kubelet[2184]: I0805 22:21:09.975108 2184 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:21:10.014568 kubelet[2184]: E0805 22:21:10.014534 2184 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:10.015165 kubelet[2184]: E0805 22:21:10.015135 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:11.014894 kubelet[2184]: E0805 22:21:11.014850 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:12.022183 kubelet[2184]: E0805 22:21:12.017034 2184 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:12.811138 systemd[1]: Reloading requested from client PID 2459 ('systemctl') (unit session-7.scope)... Aug 5 22:21:12.811154 systemd[1]: Reloading... Aug 5 22:21:12.900315 zram_generator::config[2502]: No configuration found. Aug 5 22:21:13.016420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:21:13.112364 systemd[1]: Reloading finished in 300 ms. Aug 5 22:21:13.158348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:21:13.176745 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:21:13.177069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:21:13.177125 systemd[1]: kubelet.service: Consumed 1.239s CPU time, 115.6M memory peak, 0B memory swap peak. 
Aug 5 22:21:13.189673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:21:13.351509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:21:13.363719 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:21:13.415612 kubelet[2541]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:21:13.415612 kubelet[2541]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:21:13.415612 kubelet[2541]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:21:13.416093 kubelet[2541]: I0805 22:21:13.415668 2541 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:21:13.420606 kubelet[2541]: I0805 22:21:13.420572 2541 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Aug 5 22:21:13.420606 kubelet[2541]: I0805 22:21:13.420594 2541 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:21:13.420829 kubelet[2541]: I0805 22:21:13.420807 2541 server.go:919] "Client rotation is on, will bootstrap in background" Aug 5 22:21:13.422163 kubelet[2541]: I0805 22:21:13.422146 2541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 5 22:21:13.424029 kubelet[2541]: I0805 22:21:13.423979 2541 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:21:13.433790 kubelet[2541]: I0805 22:21:13.433762 2541 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 22:21:13.436338 kubelet[2541]: I0805 22:21:13.434149 2541 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:21:13.436338 kubelet[2541]: I0805 22:21:13.434344 2541 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null
} Aug 5 22:21:13.436338 kubelet[2541]: I0805 22:21:13.434369 2541 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:21:13.436338 kubelet[2541]: I0805 22:21:13.434379 2541 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:21:13.436338 kubelet[2541]: I0805 22:21:13.434417 2541 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:21:13.436338 kubelet[2541]: I0805 22:21:13.434526 2541 kubelet.go:396] "Attempting to sync node with API server" Aug 5 22:21:13.436676 kubelet[2541]: I0805 22:21:13.434543 2541 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:21:13.436676 kubelet[2541]: I0805 22:21:13.434620 2541 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:21:13.436676 kubelet[2541]: I0805 22:21:13.434641 2541 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:21:13.439714 kubelet[2541]: I0805 22:21:13.439672 2541 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:21:13.440004 kubelet[2541]: I0805 22:21:13.439984 2541 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:21:13.441937 kubelet[2541]: I0805 22:21:13.440535 2541 server.go:1256] "Started kubelet" Aug 5 22:21:13.441937 kubelet[2541]: I0805 22:21:13.440846 2541 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:21:13.441937 kubelet[2541]: I0805 22:21:13.441799 2541 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:21:13.444290 kubelet[2541]: I0805 22:21:13.444246 2541 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:21:13.444369 kubelet[2541]: I0805 22:21:13.444351 2541 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:21:13.444569 kubelet[2541]: I0805 22:21:13.444549 2541 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:21:13.444802 
kubelet[2541]: I0805 22:21:13.444776 2541 server.go:461] "Adding debug handlers to kubelet server" Aug 5 22:21:13.446124 kubelet[2541]: I0805 22:21:13.446096 2541 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:21:13.446942 kubelet[2541]: I0805 22:21:13.446361 2541 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:21:13.448843 kubelet[2541]: E0805 22:21:13.448793 2541 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:21:13.450189 kubelet[2541]: I0805 22:21:13.450167 2541 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:21:13.450189 kubelet[2541]: I0805 22:21:13.450182 2541 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:21:13.450298 kubelet[2541]: I0805 22:21:13.450264 2541 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:21:13.461800 kubelet[2541]: I0805 22:21:13.461764 2541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:21:13.464253 kubelet[2541]: I0805 22:21:13.464167 2541 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:21:13.464253 kubelet[2541]: I0805 22:21:13.464191 2541 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:21:13.464253 kubelet[2541]: I0805 22:21:13.464206 2541 kubelet.go:2329] "Starting kubelet main sync loop" Aug 5 22:21:13.464382 kubelet[2541]: E0805 22:21:13.464261 2541 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:21:13.484780 kubelet[2541]: I0805 22:21:13.484748 2541 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:21:13.484780 kubelet[2541]: I0805 22:21:13.484773 2541 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:21:13.484780 kubelet[2541]: I0805 22:21:13.484789 2541 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:21:13.484957 kubelet[2541]: I0805 22:21:13.484946 2541 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:21:13.484988 kubelet[2541]: I0805 22:21:13.484965 2541 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:21:13.484988 kubelet[2541]: I0805 22:21:13.484972 2541 policy_none.go:49] "None policy: Start" Aug 5 22:21:13.485448 kubelet[2541]: I0805 22:21:13.485427 2541 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:21:13.485448 kubelet[2541]: I0805 22:21:13.485444 2541 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:21:13.485597 kubelet[2541]: I0805 22:21:13.485585 2541 state_mem.go:75] "Updated machine memory state" Aug 5 22:21:13.489427 kubelet[2541]: I0805 22:21:13.489346 2541 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:21:13.489659 kubelet[2541]: I0805 22:21:13.489583 2541 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:21:13.549730 kubelet[2541]: I0805 22:21:13.549707 2541 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Aug 5 22:21:13.554540 kubelet[2541]: I0805 22:21:13.554475 2541 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Aug 5 22:21:13.554624 kubelet[2541]: I0805 22:21:13.554573 2541 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Aug 5 22:21:13.564888 kubelet[2541]: I0805 22:21:13.564845 2541 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 22:21:13.565027 kubelet[2541]: I0805 22:21:13.564957 2541 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 22:21:13.565027 kubelet[2541]: I0805 22:21:13.565003 2541 topology_manager.go:215] "Topology Admit Handler" podUID="4ed83ff7ef9bd4217127b30e42301599" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 22:21:13.572443 kubelet[2541]: E0805 22:21:13.572389 2541 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:13.644887 kubelet[2541]: I0805 22:21:13.644767 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:13.644887 kubelet[2541]: I0805 22:21:13.644821 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 
22:21:13.644887 kubelet[2541]: I0805 22:21:13.644852 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:21:13.644887 kubelet[2541]: I0805 22:21:13.644887 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ed83ff7ef9bd4217127b30e42301599-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ed83ff7ef9bd4217127b30e42301599\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:13.645121 kubelet[2541]: I0805 22:21:13.644917 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ed83ff7ef9bd4217127b30e42301599-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4ed83ff7ef9bd4217127b30e42301599\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:13.645121 kubelet[2541]: I0805 22:21:13.644945 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:13.645121 kubelet[2541]: I0805 22:21:13.644992 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:13.645121 kubelet[2541]: 
I0805 22:21:13.645011 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:21:13.645121 kubelet[2541]: I0805 22:21:13.645026 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ed83ff7ef9bd4217127b30e42301599-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ed83ff7ef9bd4217127b30e42301599\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:13.871406 kubelet[2541]: E0805 22:21:13.871378 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:13.871665 kubelet[2541]: E0805 22:21:13.871647 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:13.873248 kubelet[2541]: E0805 22:21:13.873222 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:14.437147 kubelet[2541]: I0805 22:21:14.437108 2541 apiserver.go:52] "Watching apiserver" Aug 5 22:21:14.445337 kubelet[2541]: I0805 22:21:14.445310 2541 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:21:14.478296 kubelet[2541]: E0805 22:21:14.475896 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:14.478296 
kubelet[2541]: E0805 22:21:14.476806 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:14.492293 kubelet[2541]: E0805 22:21:14.489423 2541 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 5 22:21:14.492293 kubelet[2541]: E0805 22:21:14.489919 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:14.719683 kubelet[2541]: I0805 22:21:14.719567 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.719525156 podStartE2EDuration="3.719525156s" podCreationTimestamp="2024-08-05 22:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:21:14.719363958 +0000 UTC m=+1.345489533" watchObservedRunningTime="2024-08-05 22:21:14.719525156 +0000 UTC m=+1.345650731" Aug 5 22:21:14.728297 kubelet[2541]: I0805 22:21:14.725684 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.725638498 podStartE2EDuration="1.725638498s" podCreationTimestamp="2024-08-05 22:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:21:14.725431635 +0000 UTC m=+1.351557210" watchObservedRunningTime="2024-08-05 22:21:14.725638498 +0000 UTC m=+1.351764073" Aug 5 22:21:14.731710 kubelet[2541]: I0805 22:21:14.731674 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.731618824 
podStartE2EDuration="1.731618824s" podCreationTimestamp="2024-08-05 22:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:21:14.73155534 +0000 UTC m=+1.357680915" watchObservedRunningTime="2024-08-05 22:21:14.731618824 +0000 UTC m=+1.357744400" Aug 5 22:21:15.476636 kubelet[2541]: E0805 22:21:15.476605 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:17.702541 sudo[1632]: pam_unix(sudo:session): session closed for user root Aug 5 22:21:17.704151 sshd[1629]: pam_unix(sshd:session): session closed for user core Aug 5 22:21:17.707737 systemd[1]: sshd@6-10.0.0.155:22-10.0.0.1:35080.service: Deactivated successfully. Aug 5 22:21:17.709726 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:21:17.709921 systemd[1]: session-7.scope: Consumed 4.909s CPU time, 139.3M memory peak, 0B memory swap peak. Aug 5 22:21:17.710323 systemd-logind[1436]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:21:17.711103 systemd-logind[1436]: Removed session 7. 
Aug 5 22:21:19.729552 kubelet[2541]: E0805 22:21:19.729520 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:19.901883 kubelet[2541]: E0805 22:21:19.901841 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:20.211336 kubelet[2541]: E0805 22:21:20.211302 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:20.485303 kubelet[2541]: E0805 22:21:20.483586 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:20.485303 kubelet[2541]: E0805 22:21:20.485030 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:20.485712 kubelet[2541]: E0805 22:21:20.485617 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:21.841378 update_engine[1438]: I0805 22:21:21.841323 1438 update_attempter.cc:509] Updating boot flags... 
Aug 5 22:21:21.867651 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2634) Aug 5 22:21:21.906306 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2634) Aug 5 22:21:21.961328 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2634) Aug 5 22:21:25.007674 kubelet[2541]: I0805 22:21:25.007634 2541 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:21:25.008068 containerd[1451]: time="2024-08-05T22:21:25.007997557Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 5 22:21:25.008337 kubelet[2541]: I0805 22:21:25.008181 2541 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:21:25.604846 kubelet[2541]: I0805 22:21:25.604809 2541 topology_manager.go:215] "Topology Admit Handler" podUID="fa782086-9893-4a07-8817-59585b475d12" podNamespace="kube-system" podName="kube-proxy-qrpfd" Aug 5 22:21:25.612546 systemd[1]: Created slice kubepods-besteffort-podfa782086_9893_4a07_8817_59585b475d12.slice - libcontainer container kubepods-besteffort-podfa782086_9893_4a07_8817_59585b475d12.slice. 
Aug 5 22:21:25.619906 kubelet[2541]: I0805 22:21:25.619856 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa782086-9893-4a07-8817-59585b475d12-kube-proxy\") pod \"kube-proxy-qrpfd\" (UID: \"fa782086-9893-4a07-8817-59585b475d12\") " pod="kube-system/kube-proxy-qrpfd" Aug 5 22:21:25.619906 kubelet[2541]: I0805 22:21:25.619897 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa782086-9893-4a07-8817-59585b475d12-lib-modules\") pod \"kube-proxy-qrpfd\" (UID: \"fa782086-9893-4a07-8817-59585b475d12\") " pod="kube-system/kube-proxy-qrpfd" Aug 5 22:21:25.620061 kubelet[2541]: I0805 22:21:25.619941 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa782086-9893-4a07-8817-59585b475d12-xtables-lock\") pod \"kube-proxy-qrpfd\" (UID: \"fa782086-9893-4a07-8817-59585b475d12\") " pod="kube-system/kube-proxy-qrpfd" Aug 5 22:21:25.620061 kubelet[2541]: I0805 22:21:25.619965 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghn9\" (UniqueName: \"kubernetes.io/projected/fa782086-9893-4a07-8817-59585b475d12-kube-api-access-7ghn9\") pod \"kube-proxy-qrpfd\" (UID: \"fa782086-9893-4a07-8817-59585b475d12\") " pod="kube-system/kube-proxy-qrpfd" Aug 5 22:21:25.725704 kubelet[2541]: E0805 22:21:25.725667 2541 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 5 22:21:25.727064 kubelet[2541]: E0805 22:21:25.727041 2541 projected.go:200] Error preparing data for projected volume kube-api-access-7ghn9 for pod kube-system/kube-proxy-qrpfd: configmap "kube-root-ca.crt" not found Aug 5 22:21:25.727123 kubelet[2541]: E0805 22:21:25.727097 2541 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa782086-9893-4a07-8817-59585b475d12-kube-api-access-7ghn9 podName:fa782086-9893-4a07-8817-59585b475d12 nodeName:}" failed. No retries permitted until 2024-08-05 22:21:26.227080244 +0000 UTC m=+12.853205819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7ghn9" (UniqueName: "kubernetes.io/projected/fa782086-9893-4a07-8817-59585b475d12-kube-api-access-7ghn9") pod "kube-proxy-qrpfd" (UID: "fa782086-9893-4a07-8817-59585b475d12") : configmap "kube-root-ca.crt" not found Aug 5 22:21:26.115404 kubelet[2541]: I0805 22:21:26.114566 2541 topology_manager.go:215] "Topology Admit Handler" podUID="4a2d84cf-6455-481f-809f-86bfeb15cc16" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-p22f5" Aug 5 22:21:26.123639 kubelet[2541]: I0805 22:21:26.122138 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4a2d84cf-6455-481f-809f-86bfeb15cc16-var-lib-calico\") pod \"tigera-operator-76c4974c85-p22f5\" (UID: \"4a2d84cf-6455-481f-809f-86bfeb15cc16\") " pod="tigera-operator/tigera-operator-76c4974c85-p22f5" Aug 5 22:21:26.123639 kubelet[2541]: I0805 22:21:26.122213 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f7g8\" (UniqueName: \"kubernetes.io/projected/4a2d84cf-6455-481f-809f-86bfeb15cc16-kube-api-access-9f7g8\") pod \"tigera-operator-76c4974c85-p22f5\" (UID: \"4a2d84cf-6455-481f-809f-86bfeb15cc16\") " pod="tigera-operator/tigera-operator-76c4974c85-p22f5" Aug 5 22:21:26.123344 systemd[1]: Created slice kubepods-besteffort-pod4a2d84cf_6455_481f_809f_86bfeb15cc16.slice - libcontainer container kubepods-besteffort-pod4a2d84cf_6455_481f_809f_86bfeb15cc16.slice. 
Aug 5 22:21:26.427479 containerd[1451]: time="2024-08-05T22:21:26.427246115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-p22f5,Uid:4a2d84cf-6455-481f-809f-86bfeb15cc16,Namespace:tigera-operator,Attempt:0,}" Aug 5 22:21:26.456732 containerd[1451]: time="2024-08-05T22:21:26.456634003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:26.456732 containerd[1451]: time="2024-08-05T22:21:26.456684360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:26.456732 containerd[1451]: time="2024-08-05T22:21:26.456702307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:26.456732 containerd[1451]: time="2024-08-05T22:21:26.456711767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:26.477460 systemd[1]: Started cri-containerd-1dd48b502b4465c41d7c3e2eb85058227126ed84cbfb7327275de9ac665742d0.scope - libcontainer container 1dd48b502b4465c41d7c3e2eb85058227126ed84cbfb7327275de9ac665742d0. 
Aug 5 22:21:26.519053 containerd[1451]: time="2024-08-05T22:21:26.518994047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-p22f5,Uid:4a2d84cf-6455-481f-809f-86bfeb15cc16,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1dd48b502b4465c41d7c3e2eb85058227126ed84cbfb7327275de9ac665742d0\"" Aug 5 22:21:26.521109 containerd[1451]: time="2024-08-05T22:21:26.521061592Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 22:21:26.522865 kubelet[2541]: E0805 22:21:26.522847 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:26.523627 containerd[1451]: time="2024-08-05T22:21:26.523364341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qrpfd,Uid:fa782086-9893-4a07-8817-59585b475d12,Namespace:kube-system,Attempt:0,}" Aug 5 22:21:26.547243 containerd[1451]: time="2024-08-05T22:21:26.547130131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:26.547243 containerd[1451]: time="2024-08-05T22:21:26.547206120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:26.547243 containerd[1451]: time="2024-08-05T22:21:26.547223837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:26.547243 containerd[1451]: time="2024-08-05T22:21:26.547236324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:26.573538 systemd[1]: Started cri-containerd-6a105e6fef6ffc21a20d79704b05851060df5c590e5ff90167fff2fe05d8e5de.scope - libcontainer container 6a105e6fef6ffc21a20d79704b05851060df5c590e5ff90167fff2fe05d8e5de. 
Aug 5 22:21:26.598112 containerd[1451]: time="2024-08-05T22:21:26.598073957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qrpfd,Uid:fa782086-9893-4a07-8817-59585b475d12,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a105e6fef6ffc21a20d79704b05851060df5c590e5ff90167fff2fe05d8e5de\"" Aug 5 22:21:26.598759 kubelet[2541]: E0805 22:21:26.598739 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:26.603773 containerd[1451]: time="2024-08-05T22:21:26.603736360Z" level=info msg="CreateContainer within sandbox \"6a105e6fef6ffc21a20d79704b05851060df5c590e5ff90167fff2fe05d8e5de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:21:26.620122 containerd[1451]: time="2024-08-05T22:21:26.620075318Z" level=info msg="CreateContainer within sandbox \"6a105e6fef6ffc21a20d79704b05851060df5c590e5ff90167fff2fe05d8e5de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5175fae5859e644cb7c874b84ec9d9c9ced07237898ab0ea464224445fdc6cb8\"" Aug 5 22:21:26.620637 containerd[1451]: time="2024-08-05T22:21:26.620619510Z" level=info msg="StartContainer for \"5175fae5859e644cb7c874b84ec9d9c9ced07237898ab0ea464224445fdc6cb8\"" Aug 5 22:21:26.648464 systemd[1]: Started cri-containerd-5175fae5859e644cb7c874b84ec9d9c9ced07237898ab0ea464224445fdc6cb8.scope - libcontainer container 5175fae5859e644cb7c874b84ec9d9c9ced07237898ab0ea464224445fdc6cb8. 
Aug 5 22:21:26.680999 containerd[1451]: time="2024-08-05T22:21:26.680850610Z" level=info msg="StartContainer for \"5175fae5859e644cb7c874b84ec9d9c9ced07237898ab0ea464224445fdc6cb8\" returns successfully" Aug 5 22:21:27.494692 kubelet[2541]: E0805 22:21:27.494607 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:27.772960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250553632.mount: Deactivated successfully. Aug 5 22:21:28.470613 containerd[1451]: time="2024-08-05T22:21:28.470543085Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:28.471203 containerd[1451]: time="2024-08-05T22:21:28.471130786Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068" Aug 5 22:21:28.472419 containerd[1451]: time="2024-08-05T22:21:28.472368577Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:28.474655 containerd[1451]: time="2024-08-05T22:21:28.474593438Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:28.475254 containerd[1451]: time="2024-08-05T22:21:28.475217595Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 1.95411659s" Aug 5 22:21:28.475254 containerd[1451]: time="2024-08-05T22:21:28.475246245Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Aug 5 22:21:28.476916 containerd[1451]: time="2024-08-05T22:21:28.476888405Z" level=info msg="CreateContainer within sandbox \"1dd48b502b4465c41d7c3e2eb85058227126ed84cbfb7327275de9ac665742d0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 22:21:28.492078 containerd[1451]: time="2024-08-05T22:21:28.492018925Z" level=info msg="CreateContainer within sandbox \"1dd48b502b4465c41d7c3e2eb85058227126ed84cbfb7327275de9ac665742d0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cf7be35d538d8823f0ebe9744fcb69caa7f8085274bd04330e3cfa1b074cf0fb\"" Aug 5 22:21:28.492635 containerd[1451]: time="2024-08-05T22:21:28.492598238Z" level=info msg="StartContainer for \"cf7be35d538d8823f0ebe9744fcb69caa7f8085274bd04330e3cfa1b074cf0fb\"" Aug 5 22:21:28.527418 systemd[1]: Started cri-containerd-cf7be35d538d8823f0ebe9744fcb69caa7f8085274bd04330e3cfa1b074cf0fb.scope - libcontainer container cf7be35d538d8823f0ebe9744fcb69caa7f8085274bd04330e3cfa1b074cf0fb. 
Aug 5 22:21:28.619561 containerd[1451]: time="2024-08-05T22:21:28.619508027Z" level=info msg="StartContainer for \"cf7be35d538d8823f0ebe9744fcb69caa7f8085274bd04330e3cfa1b074cf0fb\" returns successfully" Aug 5 22:21:29.510052 kubelet[2541]: I0805 22:21:29.509970 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qrpfd" podStartSLOduration=4.508970796 podStartE2EDuration="4.508970796s" podCreationTimestamp="2024-08-05 22:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:21:27.503293143 +0000 UTC m=+14.129418718" watchObservedRunningTime="2024-08-05 22:21:29.508970796 +0000 UTC m=+16.135096371" Aug 5 22:21:29.510052 kubelet[2541]: I0805 22:21:29.510057 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-p22f5" podStartSLOduration=1.5548521690000001 podStartE2EDuration="3.510038515s" podCreationTimestamp="2024-08-05 22:21:26 +0000 UTC" firstStartedPulling="2024-08-05 22:21:26.520375093 +0000 UTC m=+13.146500668" lastFinishedPulling="2024-08-05 22:21:28.475561449 +0000 UTC m=+15.101687014" observedRunningTime="2024-08-05 22:21:29.508661807 +0000 UTC m=+16.134787382" watchObservedRunningTime="2024-08-05 22:21:29.510038515 +0000 UTC m=+16.136164090" Aug 5 22:21:31.434546 kubelet[2541]: I0805 22:21:31.434492 2541 topology_manager.go:215] "Topology Admit Handler" podUID="6c6e6cb1-46c7-44e3-8b21-5833766115f2" podNamespace="calico-system" podName="calico-typha-cf997697-bdf48" Aug 5 22:21:31.453338 systemd[1]: Created slice kubepods-besteffort-pod6c6e6cb1_46c7_44e3_8b21_5833766115f2.slice - libcontainer container kubepods-besteffort-pod6c6e6cb1_46c7_44e3_8b21_5833766115f2.slice. 
Aug 5 22:21:31.454105 kubelet[2541]: I0805 22:21:31.453950 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b42sv\" (UniqueName: \"kubernetes.io/projected/6c6e6cb1-46c7-44e3-8b21-5833766115f2-kube-api-access-b42sv\") pod \"calico-typha-cf997697-bdf48\" (UID: \"6c6e6cb1-46c7-44e3-8b21-5833766115f2\") " pod="calico-system/calico-typha-cf997697-bdf48" Aug 5 22:21:31.454105 kubelet[2541]: I0805 22:21:31.453998 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6e6cb1-46c7-44e3-8b21-5833766115f2-tigera-ca-bundle\") pod \"calico-typha-cf997697-bdf48\" (UID: \"6c6e6cb1-46c7-44e3-8b21-5833766115f2\") " pod="calico-system/calico-typha-cf997697-bdf48" Aug 5 22:21:31.454105 kubelet[2541]: I0805 22:21:31.454024 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6c6e6cb1-46c7-44e3-8b21-5833766115f2-typha-certs\") pod \"calico-typha-cf997697-bdf48\" (UID: \"6c6e6cb1-46c7-44e3-8b21-5833766115f2\") " pod="calico-system/calico-typha-cf997697-bdf48" Aug 5 22:21:31.478623 kubelet[2541]: I0805 22:21:31.477861 2541 topology_manager.go:215] "Topology Admit Handler" podUID="b87dc31e-da62-4794-9498-efa057bc6788" podNamespace="calico-system" podName="calico-node-dkxlp" Aug 5 22:21:31.488494 systemd[1]: Created slice kubepods-besteffort-podb87dc31e_da62_4794_9498_efa057bc6788.slice - libcontainer container kubepods-besteffort-podb87dc31e_da62_4794_9498_efa057bc6788.slice. 
Aug 5 22:21:31.554428 kubelet[2541]: I0805 22:21:31.554373 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b87dc31e-da62-4794-9498-efa057bc6788-node-certs\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554605 kubelet[2541]: I0805 22:21:31.554444 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b87dc31e-da62-4794-9498-efa057bc6788-tigera-ca-bundle\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554711 kubelet[2541]: I0805 22:21:31.554691 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b87dc31e-da62-4794-9498-efa057bc6788-var-lib-calico\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554775 kubelet[2541]: I0805 22:21:31.554726 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b87dc31e-da62-4794-9498-efa057bc6788-cni-net-dir\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554775 kubelet[2541]: I0805 22:21:31.554755 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbc49\" (UniqueName: \"kubernetes.io/projected/b87dc31e-da62-4794-9498-efa057bc6788-kube-api-access-lbc49\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554842 kubelet[2541]: I0805 22:21:31.554796 2541 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b87dc31e-da62-4794-9498-efa057bc6788-lib-modules\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554842 kubelet[2541]: I0805 22:21:31.554822 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b87dc31e-da62-4794-9498-efa057bc6788-cni-log-dir\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554917 kubelet[2541]: I0805 22:21:31.554850 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b87dc31e-da62-4794-9498-efa057bc6788-flexvol-driver-host\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554917 kubelet[2541]: I0805 22:21:31.554891 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b87dc31e-da62-4794-9498-efa057bc6788-cni-bin-dir\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554988 kubelet[2541]: I0805 22:21:31.554917 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b87dc31e-da62-4794-9498-efa057bc6788-var-run-calico\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554988 kubelet[2541]: I0805 22:21:31.554944 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b87dc31e-da62-4794-9498-efa057bc6788-xtables-lock\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.554988 kubelet[2541]: I0805 22:21:31.554969 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b87dc31e-da62-4794-9498-efa057bc6788-policysync\") pod \"calico-node-dkxlp\" (UID: \"b87dc31e-da62-4794-9498-efa057bc6788\") " pod="calico-system/calico-node-dkxlp" Aug 5 22:21:31.594158 kubelet[2541]: I0805 22:21:31.594119 2541 topology_manager.go:215] "Topology Admit Handler" podUID="688c5c2d-a5bb-4def-9fd3-0971268d2169" podNamespace="calico-system" podName="csi-node-driver-gh8dm" Aug 5 22:21:31.595424 kubelet[2541]: E0805 22:21:31.595382 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gh8dm" podUID="688c5c2d-a5bb-4def-9fd3-0971268d2169" Aug 5 22:21:31.655599 kubelet[2541]: I0805 22:21:31.655547 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/688c5c2d-a5bb-4def-9fd3-0971268d2169-registration-dir\") pod \"csi-node-driver-gh8dm\" (UID: \"688c5c2d-a5bb-4def-9fd3-0971268d2169\") " pod="calico-system/csi-node-driver-gh8dm" Aug 5 22:21:31.655599 kubelet[2541]: I0805 22:21:31.655595 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/688c5c2d-a5bb-4def-9fd3-0971268d2169-varrun\") pod \"csi-node-driver-gh8dm\" (UID: \"688c5c2d-a5bb-4def-9fd3-0971268d2169\") " pod="calico-system/csi-node-driver-gh8dm" 
Aug 5 22:21:31.655599 kubelet[2541]: I0805 22:21:31.655615 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/688c5c2d-a5bb-4def-9fd3-0971268d2169-kubelet-dir\") pod \"csi-node-driver-gh8dm\" (UID: \"688c5c2d-a5bb-4def-9fd3-0971268d2169\") " pod="calico-system/csi-node-driver-gh8dm" Aug 5 22:21:31.655842 kubelet[2541]: I0805 22:21:31.655683 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/688c5c2d-a5bb-4def-9fd3-0971268d2169-socket-dir\") pod \"csi-node-driver-gh8dm\" (UID: \"688c5c2d-a5bb-4def-9fd3-0971268d2169\") " pod="calico-system/csi-node-driver-gh8dm" Aug 5 22:21:31.655842 kubelet[2541]: I0805 22:21:31.655702 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pc9v\" (UniqueName: \"kubernetes.io/projected/688c5c2d-a5bb-4def-9fd3-0971268d2169-kube-api-access-6pc9v\") pod \"csi-node-driver-gh8dm\" (UID: \"688c5c2d-a5bb-4def-9fd3-0971268d2169\") " pod="calico-system/csi-node-driver-gh8dm" Aug 5 22:21:31.660537 kubelet[2541]: E0805 22:21:31.659873 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.660537 kubelet[2541]: W0805 22:21:31.659898 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.660537 kubelet[2541]: E0805 22:21:31.659926 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.660537 kubelet[2541]: E0805 22:21:31.660101 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.660537 kubelet[2541]: W0805 22:21:31.660109 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.660537 kubelet[2541]: E0805 22:21:31.660132 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.660537 kubelet[2541]: E0805 22:21:31.660343 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.660537 kubelet[2541]: W0805 22:21:31.660350 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.660537 kubelet[2541]: E0805 22:21:31.660360 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.660537 kubelet[2541]: E0805 22:21:31.660532 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.660858 kubelet[2541]: W0805 22:21:31.660539 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.660858 kubelet[2541]: E0805 22:21:31.660549 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.667720 kubelet[2541]: E0805 22:21:31.667683 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.667720 kubelet[2541]: W0805 22:21:31.667711 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.667848 kubelet[2541]: E0805 22:21:31.667834 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.673542 kubelet[2541]: E0805 22:21:31.673522 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.673542 kubelet[2541]: W0805 22:21:31.673540 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.673618 kubelet[2541]: E0805 22:21:31.673559 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.757137 kubelet[2541]: E0805 22:21:31.757023 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.757137 kubelet[2541]: W0805 22:21:31.757049 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.757137 kubelet[2541]: E0805 22:21:31.757079 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.757444 kubelet[2541]: E0805 22:21:31.757412 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.757444 kubelet[2541]: W0805 22:21:31.757440 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.757542 kubelet[2541]: E0805 22:21:31.757472 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.757898 kubelet[2541]: E0805 22:21:31.757836 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.757898 kubelet[2541]: W0805 22:21:31.757847 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.757898 kubelet[2541]: E0805 22:21:31.757865 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.758641 kubelet[2541]: E0805 22:21:31.758297 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:31.758641 kubelet[2541]: E0805 22:21:31.758340 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.758641 kubelet[2541]: W0805 22:21:31.758350 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.758641 kubelet[2541]: E0805 22:21:31.758372 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.758864 kubelet[2541]: E0805 22:21:31.758657 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.758864 kubelet[2541]: W0805 22:21:31.758666 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.758864 kubelet[2541]: E0805 22:21:31.758807 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.759107 kubelet[2541]: E0805 22:21:31.759089 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.759107 kubelet[2541]: W0805 22:21:31.759102 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.759317 kubelet[2541]: E0805 22:21:31.759256 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.759317 kubelet[2541]: E0805 22:21:31.759310 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.759402 kubelet[2541]: W0805 22:21:31.759321 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.759427 containerd[1451]: time="2024-08-05T22:21:31.759345539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf997697-bdf48,Uid:6c6e6cb1-46c7-44e3-8b21-5833766115f2,Namespace:calico-system,Attempt:0,}" Aug 5 22:21:31.759908 kubelet[2541]: E0805 22:21:31.759408 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.759908 kubelet[2541]: E0805 22:21:31.759534 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.759908 kubelet[2541]: W0805 22:21:31.759541 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.759908 kubelet[2541]: E0805 22:21:31.759591 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.759908 kubelet[2541]: E0805 22:21:31.759719 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.759908 kubelet[2541]: W0805 22:21:31.759726 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.759908 kubelet[2541]: E0805 22:21:31.759748 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.759908 kubelet[2541]: E0805 22:21:31.759913 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.760132 kubelet[2541]: W0805 22:21:31.759923 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.760132 kubelet[2541]: E0805 22:21:31.759944 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.760132 kubelet[2541]: E0805 22:21:31.760118 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.760132 kubelet[2541]: W0805 22:21:31.760124 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.760226 kubelet[2541]: E0805 22:21:31.760150 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.760340 kubelet[2541]: E0805 22:21:31.760317 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.760340 kubelet[2541]: W0805 22:21:31.760327 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.760415 kubelet[2541]: E0805 22:21:31.760402 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.760610 kubelet[2541]: E0805 22:21:31.760594 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.760859 kubelet[2541]: W0805 22:21:31.760672 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.760859 kubelet[2541]: E0805 22:21:31.760735 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.761052 kubelet[2541]: E0805 22:21:31.761039 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.761122 kubelet[2541]: W0805 22:21:31.761110 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.761219 kubelet[2541]: E0805 22:21:31.761189 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.761453 kubelet[2541]: E0805 22:21:31.761432 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.761553 kubelet[2541]: W0805 22:21:31.761451 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.761553 kubelet[2541]: E0805 22:21:31.761494 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.761742 kubelet[2541]: E0805 22:21:31.761722 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.761742 kubelet[2541]: W0805 22:21:31.761739 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.762170 kubelet[2541]: E0805 22:21:31.761780 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.762170 kubelet[2541]: E0805 22:21:31.761999 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.762170 kubelet[2541]: W0805 22:21:31.762006 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.762293 kubelet[2541]: E0805 22:21:31.762283 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.762875 kubelet[2541]: E0805 22:21:31.762435 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.762875 kubelet[2541]: W0805 22:21:31.762445 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.762875 kubelet[2541]: E0805 22:21:31.762525 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.762875 kubelet[2541]: E0805 22:21:31.762696 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.762875 kubelet[2541]: W0805 22:21:31.762703 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.762875 kubelet[2541]: E0805 22:21:31.762734 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.763114 kubelet[2541]: E0805 22:21:31.762940 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.763114 kubelet[2541]: W0805 22:21:31.762949 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.763114 kubelet[2541]: E0805 22:21:31.762975 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.763224 kubelet[2541]: E0805 22:21:31.763199 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.763248 kubelet[2541]: W0805 22:21:31.763233 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.763286 kubelet[2541]: E0805 22:21:31.763257 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.763687 kubelet[2541]: E0805 22:21:31.763663 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.763687 kubelet[2541]: W0805 22:21:31.763682 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.763740 kubelet[2541]: E0805 22:21:31.763704 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.764108 kubelet[2541]: E0805 22:21:31.764090 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.764108 kubelet[2541]: W0805 22:21:31.764103 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.764158 kubelet[2541]: E0805 22:21:31.764120 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.764959 kubelet[2541]: E0805 22:21:31.764930 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.764959 kubelet[2541]: W0805 22:21:31.764951 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.765030 kubelet[2541]: E0805 22:21:31.764967 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.765254 kubelet[2541]: E0805 22:21:31.765231 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.765254 kubelet[2541]: W0805 22:21:31.765243 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.765254 kubelet[2541]: E0805 22:21:31.765253 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:31.772524 kubelet[2541]: E0805 22:21:31.772486 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:31.772524 kubelet[2541]: W0805 22:21:31.772515 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:31.772679 kubelet[2541]: E0805 22:21:31.772576 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:31.793809 kubelet[2541]: E0805 22:21:31.793546 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:31.795006 containerd[1451]: time="2024-08-05T22:21:31.794458681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dkxlp,Uid:b87dc31e-da62-4794-9498-efa057bc6788,Namespace:calico-system,Attempt:0,}" Aug 5 22:21:31.803406 containerd[1451]: time="2024-08-05T22:21:31.802521157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:31.803406 containerd[1451]: time="2024-08-05T22:21:31.802586090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:31.803406 containerd[1451]: time="2024-08-05T22:21:31.802605761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:31.803406 containerd[1451]: time="2024-08-05T22:21:31.802619448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:31.822574 systemd[1]: Started cri-containerd-e61fd2106f80b70a41d4f609b9619f8acce763876b34bdaa547c5f8b1d4c510f.scope - libcontainer container e61fd2106f80b70a41d4f609b9619f8acce763876b34bdaa547c5f8b1d4c510f. Aug 5 22:21:31.869542 containerd[1451]: time="2024-08-05T22:21:31.869485285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf997697-bdf48,Uid:6c6e6cb1-46c7-44e3-8b21-5833766115f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"e61fd2106f80b70a41d4f609b9619f8acce763876b34bdaa547c5f8b1d4c510f\"" Aug 5 22:21:31.870498 kubelet[2541]: E0805 22:21:31.870455 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:31.871684 containerd[1451]: time="2024-08-05T22:21:31.871640468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:21:31.933097 containerd[1451]: time="2024-08-05T22:21:31.932738126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:31.933097 containerd[1451]: time="2024-08-05T22:21:31.932819142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:31.933097 containerd[1451]: time="2024-08-05T22:21:31.932846207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:31.933097 containerd[1451]: time="2024-08-05T22:21:31.932864906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:31.959617 systemd[1]: Started cri-containerd-935eb03d4473ed58cec2b53622fe56541479cec29e02455720d2bed0805be8c6.scope - libcontainer container 935eb03d4473ed58cec2b53622fe56541479cec29e02455720d2bed0805be8c6. Aug 5 22:21:31.992462 containerd[1451]: time="2024-08-05T22:21:31.992260671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dkxlp,Uid:b87dc31e-da62-4794-9498-efa057bc6788,Namespace:calico-system,Attempt:0,} returns sandbox id \"935eb03d4473ed58cec2b53622fe56541479cec29e02455720d2bed0805be8c6\"" Aug 5 22:21:31.993036 kubelet[2541]: E0805 22:21:31.993016 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:33.471671 kubelet[2541]: E0805 22:21:33.471568 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gh8dm" podUID="688c5c2d-a5bb-4def-9fd3-0971268d2169" Aug 5 22:21:33.973365 containerd[1451]: time="2024-08-05T22:21:33.973299283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:33.974203 containerd[1451]: time="2024-08-05T22:21:33.974155999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 5 22:21:33.975546 containerd[1451]: time="2024-08-05T22:21:33.975504578Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:33.977690 containerd[1451]: time="2024-08-05T22:21:33.977659600Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:33.978436 containerd[1451]: time="2024-08-05T22:21:33.978383496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.106702154s" Aug 5 22:21:33.978436 containerd[1451]: time="2024-08-05T22:21:33.978418386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 5 22:21:33.979375 containerd[1451]: time="2024-08-05T22:21:33.979162413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:21:33.988680 containerd[1451]: time="2024-08-05T22:21:33.987690384Z" level=info msg="CreateContainer within sandbox \"e61fd2106f80b70a41d4f609b9619f8acce763876b34bdaa547c5f8b1d4c510f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:21:34.007507 containerd[1451]: time="2024-08-05T22:21:34.007456343Z" level=info msg="CreateContainer within sandbox \"e61fd2106f80b70a41d4f609b9619f8acce763876b34bdaa547c5f8b1d4c510f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6bf81f2c867eec1051dff4a3c45396e0204e2155d44cd119d14add92b884678d\"" Aug 5 22:21:34.008320 containerd[1451]: time="2024-08-05T22:21:34.008162357Z" level=info msg="StartContainer for \"6bf81f2c867eec1051dff4a3c45396e0204e2155d44cd119d14add92b884678d\"" Aug 5 22:21:34.043575 systemd[1]: Started cri-containerd-6bf81f2c867eec1051dff4a3c45396e0204e2155d44cd119d14add92b884678d.scope - libcontainer container 
6bf81f2c867eec1051dff4a3c45396e0204e2155d44cd119d14add92b884678d. Aug 5 22:21:34.091825 containerd[1451]: time="2024-08-05T22:21:34.091773550Z" level=info msg="StartContainer for \"6bf81f2c867eec1051dff4a3c45396e0204e2155d44cd119d14add92b884678d\" returns successfully" Aug 5 22:21:34.512827 kubelet[2541]: E0805 22:21:34.512793 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:34.522901 kubelet[2541]: I0805 22:21:34.522848 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-cf997697-bdf48" podStartSLOduration=1.415302084 podStartE2EDuration="3.522796598s" podCreationTimestamp="2024-08-05 22:21:31 +0000 UTC" firstStartedPulling="2024-08-05 22:21:31.87132015 +0000 UTC m=+18.497445725" lastFinishedPulling="2024-08-05 22:21:33.978814654 +0000 UTC m=+20.604940239" observedRunningTime="2024-08-05 22:21:34.522147759 +0000 UTC m=+21.148273354" watchObservedRunningTime="2024-08-05 22:21:34.522796598 +0000 UTC m=+21.148922173" Aug 5 22:21:34.575397 kubelet[2541]: E0805 22:21:34.575351 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.575397 kubelet[2541]: W0805 22:21:34.575382 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.575397 kubelet[2541]: E0805 22:21:34.575414 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.575692 kubelet[2541]: E0805 22:21:34.575677 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.575692 kubelet[2541]: W0805 22:21:34.575687 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.575779 kubelet[2541]: E0805 22:21:34.575700 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.575954 kubelet[2541]: E0805 22:21:34.575920 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.575954 kubelet[2541]: W0805 22:21:34.575931 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.575954 kubelet[2541]: E0805 22:21:34.575947 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.576187 kubelet[2541]: E0805 22:21:34.576166 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.576187 kubelet[2541]: W0805 22:21:34.576177 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.576321 kubelet[2541]: E0805 22:21:34.576190 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.576615 kubelet[2541]: E0805 22:21:34.576578 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.576615 kubelet[2541]: W0805 22:21:34.576602 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.576615 kubelet[2541]: E0805 22:21:34.576617 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.576884 kubelet[2541]: E0805 22:21:34.576854 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.576884 kubelet[2541]: W0805 22:21:34.576866 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.576884 kubelet[2541]: E0805 22:21:34.576879 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.577111 kubelet[2541]: E0805 22:21:34.577082 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.577111 kubelet[2541]: W0805 22:21:34.577093 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.577111 kubelet[2541]: E0805 22:21:34.577106 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.577344 kubelet[2541]: E0805 22:21:34.577327 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.577344 kubelet[2541]: W0805 22:21:34.577338 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.577430 kubelet[2541]: E0805 22:21:34.577352 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.577687 kubelet[2541]: E0805 22:21:34.577642 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.577687 kubelet[2541]: W0805 22:21:34.577676 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.577834 kubelet[2541]: E0805 22:21:34.577713 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.578016 kubelet[2541]: E0805 22:21:34.577997 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.578016 kubelet[2541]: W0805 22:21:34.578009 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.578077 kubelet[2541]: E0805 22:21:34.578021 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.578240 kubelet[2541]: E0805 22:21:34.578224 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.578240 kubelet[2541]: W0805 22:21:34.578235 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.578317 kubelet[2541]: E0805 22:21:34.578246 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.578552 kubelet[2541]: E0805 22:21:34.578523 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.578552 kubelet[2541]: W0805 22:21:34.578536 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.578552 kubelet[2541]: E0805 22:21:34.578548 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.578810 kubelet[2541]: E0805 22:21:34.578790 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.578810 kubelet[2541]: W0805 22:21:34.578801 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.578882 kubelet[2541]: E0805 22:21:34.578813 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.579050 kubelet[2541]: E0805 22:21:34.579031 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.579050 kubelet[2541]: W0805 22:21:34.579041 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.579121 kubelet[2541]: E0805 22:21:34.579054 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.579308 kubelet[2541]: E0805 22:21:34.579265 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.579308 kubelet[2541]: W0805 22:21:34.579302 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.579407 kubelet[2541]: E0805 22:21:34.579326 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.579665 kubelet[2541]: E0805 22:21:34.579649 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.579665 kubelet[2541]: W0805 22:21:34.579662 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.579721 kubelet[2541]: E0805 22:21:34.579675 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.579956 kubelet[2541]: E0805 22:21:34.579940 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.579956 kubelet[2541]: W0805 22:21:34.579951 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.580031 kubelet[2541]: E0805 22:21:34.579969 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.580412 kubelet[2541]: E0805 22:21:34.580371 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.580412 kubelet[2541]: W0805 22:21:34.580402 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.580507 kubelet[2541]: E0805 22:21:34.580438 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.580718 kubelet[2541]: E0805 22:21:34.580697 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.580718 kubelet[2541]: W0805 22:21:34.580708 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.580775 kubelet[2541]: E0805 22:21:34.580726 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:21:34.580946 kubelet[2541]: E0805 22:21:34.580931 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.580946 kubelet[2541]: W0805 22:21:34.580941 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.580989 kubelet[2541]: E0805 22:21:34.580956 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:21:34.581186 kubelet[2541]: E0805 22:21:34.581171 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:21:34.581186 kubelet[2541]: W0805 22:21:34.581181 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:21:34.581234 kubelet[2541]: E0805 22:21:34.581216 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 5 22:21:34.581437 kubelet[2541]: E0805 22:21:34.581419 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:21:34.581437 kubelet[2541]: W0805 22:21:34.581431 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:21:34.581488 kubelet[2541]: E0805 22:21:34.581467 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:21:34.584845 kubelet[2541]: E0805 22:21:34.584798 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 5 22:21:35.430112 containerd[1451]: time="2024-08-05T22:21:35.430049621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:35.431095 containerd[1451]: time="2024-08-05T22:21:35.431030168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568"
Aug 5 22:21:35.434055 containerd[1451]: time="2024-08-05T22:21:35.432454773Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:35.440329 containerd[1451]: time="2024-08-05T22:21:35.440160984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:21:35.441430 containerd[1451]: time="2024-08-05T22:21:35.441377358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.462157758s"
Aug 5 22:21:35.441430 containerd[1451]: time="2024-08-05T22:21:35.441421568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\""
Aug 5 22:21:35.443227 containerd[1451]: time="2024-08-05T22:21:35.443200310Z" level=info msg="CreateContainer within sandbox \"935eb03d4473ed58cec2b53622fe56541479cec29e02455720d2bed0805be8c6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 5 22:21:35.465413 kubelet[2541]: E0805 22:21:35.465363 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gh8dm" podUID="688c5c2d-a5bb-4def-9fd3-0971268d2169"
Aug 5 22:21:35.514914 kubelet[2541]: I0805 22:21:35.514867 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 5 22:21:35.515571 kubelet[2541]: E0805 22:21:35.515544 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:21:35.579902 containerd[1451]: time="2024-08-05T22:21:35.579850880Z" level=info msg="CreateContainer within sandbox \"935eb03d4473ed58cec2b53622fe56541479cec29e02455720d2bed0805be8c6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c7e4396f273dae541a14ec80cf4f8bb5f6da42e803ccc3b0d4e76d2aeddb1bbf\""
Aug 5 22:21:35.580312 containerd[1451]: time="2024-08-05T22:21:35.580245150Z" level=info msg="StartContainer for \"c7e4396f273dae541a14ec80cf4f8bb5f6da42e803ccc3b0d4e76d2aeddb1bbf\""
Aug 5 22:21:35.585545 kubelet[2541]: E0805 22:21:35.585513 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:21:35.585545 kubelet[2541]: W0805 22:21:35.585537 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:21:35.585725 kubelet[2541]: E0805 22:21:35.585567 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 5 22:21:35.616498 systemd[1]: Started cri-containerd-c7e4396f273dae541a14ec80cf4f8bb5f6da42e803ccc3b0d4e76d2aeddb1bbf.scope - libcontainer container c7e4396f273dae541a14ec80cf4f8bb5f6da42e803ccc3b0d4e76d2aeddb1bbf.
Aug 5 22:21:35.659979 containerd[1451]: time="2024-08-05T22:21:35.659916498Z" level=info msg="StartContainer for \"c7e4396f273dae541a14ec80cf4f8bb5f6da42e803ccc3b0d4e76d2aeddb1bbf\" returns successfully"
Aug 5 22:21:35.668358 systemd[1]: cri-containerd-c7e4396f273dae541a14ec80cf4f8bb5f6da42e803ccc3b0d4e76d2aeddb1bbf.scope: Deactivated successfully.
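The FlexVolume errors above all have the same shape: kubelet probes the plugin directory `nodeagent~uds`, tries to execute `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` with the `init` subcommand, finds no executable, and then fails to unmarshal the empty stdout as JSON ("unexpected end of JSON input"). As an illustrative sketch only (not the actual `uds` driver that belongs in that directory), a FlexVolume driver is simply an executable that answers subcommands with a JSON status object on stdout; the function name `handle_call` is an assumption for this example:

```shell
#!/bin/sh
# Hypothetical minimal FlexVolume driver sketch. kubelet invokes the driver
# binary with a subcommand ("init", "mount", ...) and parses stdout as JSON;
# an empty stdout produces the "unexpected end of JSON input" errors above.
handle_call() {
  case "$1" in
    init)
      # Declare success and that no separate attach/detach step is needed.
      printf '%s\n' '{"status":"Success","capabilities":{"attach":false}}'
      ;;
    *)
      # Unhandled subcommands must still return valid JSON.
      printf '%s\n' '{"status":"Not supported"}'
      ;;
  esac
}

handle_call "${1:-init}"
```

Installing any executable of this shape at the probed path would stop the repeated unmarshal errors, though the real fix in this log is the Calico/Istio component that ships its own `uds` binary.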
Aug 5 22:21:35.708306 containerd[1451]: time="2024-08-05T22:21:35.708146977Z" level=info msg="shim disconnected" id=c7e4396f273dae541a14ec80cf4f8bb5f6da42e803ccc3b0d4e76d2aeddb1bbf namespace=k8s.io
Aug 5 22:21:35.708306 containerd[1451]: time="2024-08-05T22:21:35.708207941Z" level=warning msg="cleaning up after shim disconnected" id=c7e4396f273dae541a14ec80cf4f8bb5f6da42e803ccc3b0d4e76d2aeddb1bbf namespace=k8s.io
Aug 5 22:21:35.708306 containerd[1451]: time="2024-08-05T22:21:35.708219033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:21:35.985112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7e4396f273dae541a14ec80cf4f8bb5f6da42e803ccc3b0d4e76d2aeddb1bbf-rootfs.mount: Deactivated successfully.
Aug 5 22:21:36.517675 kubelet[2541]: E0805 22:21:36.517649 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:21:36.518178 containerd[1451]: time="2024-08-05T22:21:36.518120398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Aug 5 22:21:37.465392 kubelet[2541]: E0805 22:21:37.465359 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gh8dm" podUID="688c5c2d-a5bb-4def-9fd3-0971268d2169"
Aug 5 22:21:39.470592 kubelet[2541]: E0805 22:21:39.469499 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gh8dm" podUID="688c5c2d-a5bb-4def-9fd3-0971268d2169"
Aug 5 22:21:40.348530 containerd[1451]: time="2024-08-05T22:21:40.348464233Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:40.349256 containerd[1451]: time="2024-08-05T22:21:40.349187590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Aug 5 22:21:40.350299 containerd[1451]: time="2024-08-05T22:21:40.350257702Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:40.352634 containerd[1451]: time="2024-08-05T22:21:40.352603245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:40.353435 containerd[1451]: time="2024-08-05T22:21:40.353382544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 3.835225402s" Aug 5 22:21:40.353435 containerd[1451]: time="2024-08-05T22:21:40.353412534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 5 22:21:40.356558 containerd[1451]: time="2024-08-05T22:21:40.356531224Z" level=info msg="CreateContainer within sandbox \"935eb03d4473ed58cec2b53622fe56541479cec29e02455720d2bed0805be8c6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:21:40.372321 containerd[1451]: time="2024-08-05T22:21:40.372256066Z" level=info msg="CreateContainer within sandbox \"935eb03d4473ed58cec2b53622fe56541479cec29e02455720d2bed0805be8c6\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4e988a9ac84654a0bba571976fcefa93b2901fe19aa27484ee86fc8c03897359\"" Aug 5 22:21:40.372845 containerd[1451]: time="2024-08-05T22:21:40.372780796Z" level=info msg="StartContainer for \"4e988a9ac84654a0bba571976fcefa93b2901fe19aa27484ee86fc8c03897359\"" Aug 5 22:21:40.405412 systemd[1]: Started cri-containerd-4e988a9ac84654a0bba571976fcefa93b2901fe19aa27484ee86fc8c03897359.scope - libcontainer container 4e988a9ac84654a0bba571976fcefa93b2901fe19aa27484ee86fc8c03897359. Aug 5 22:21:40.437787 containerd[1451]: time="2024-08-05T22:21:40.437713482Z" level=info msg="StartContainer for \"4e988a9ac84654a0bba571976fcefa93b2901fe19aa27484ee86fc8c03897359\" returns successfully" Aug 5 22:21:40.535339 kubelet[2541]: E0805 22:21:40.535311 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:41.464748 kubelet[2541]: E0805 22:21:41.464692 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gh8dm" podUID="688c5c2d-a5bb-4def-9fd3-0971268d2169" Aug 5 22:21:41.535013 kubelet[2541]: E0805 22:21:41.534983 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:41.562160 containerd[1451]: time="2024-08-05T22:21:41.562100909Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:21:41.565319 systemd[1]: 
cri-containerd-4e988a9ac84654a0bba571976fcefa93b2901fe19aa27484ee86fc8c03897359.scope: Deactivated successfully. Aug 5 22:21:41.586541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e988a9ac84654a0bba571976fcefa93b2901fe19aa27484ee86fc8c03897359-rootfs.mount: Deactivated successfully. Aug 5 22:21:41.587404 kubelet[2541]: I0805 22:21:41.587072 2541 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Aug 5 22:21:41.605676 kubelet[2541]: I0805 22:21:41.605634 2541 topology_manager.go:215] "Topology Admit Handler" podUID="10b22de4-c232-47e0-91fb-22ac20262794" podNamespace="calico-system" podName="calico-kube-controllers-6d9b85cbd7-fgmwx" Aug 5 22:21:41.605903 kubelet[2541]: I0805 22:21:41.605851 2541 topology_manager.go:215] "Topology Admit Handler" podUID="604142d0-8114-43a0-91aa-43bd2b50a943" podNamespace="kube-system" podName="coredns-76f75df574-xcv7z" Aug 5 22:21:41.608740 kubelet[2541]: I0805 22:21:41.605937 2541 topology_manager.go:215] "Topology Admit Handler" podUID="4ef4f720-0563-448a-af5b-bdacf6748482" podNamespace="kube-system" podName="coredns-76f75df574-g8prk" Aug 5 22:21:41.613419 systemd[1]: Created slice kubepods-burstable-pod4ef4f720_0563_448a_af5b_bdacf6748482.slice - libcontainer container kubepods-burstable-pod4ef4f720_0563_448a_af5b_bdacf6748482.slice. Aug 5 22:21:41.622858 systemd[1]: Created slice kubepods-burstable-pod604142d0_8114_43a0_91aa_43bd2b50a943.slice - libcontainer container kubepods-burstable-pod604142d0_8114_43a0_91aa_43bd2b50a943.slice. Aug 5 22:21:41.627406 systemd[1]: Created slice kubepods-besteffort-pod10b22de4_c232_47e0_91fb_22ac20262794.slice - libcontainer container kubepods-besteffort-pod10b22de4_c232_47e0_91fb_22ac20262794.slice. 
Aug 5 22:21:41.650561 kubelet[2541]: I0805 22:21:41.650458 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ef4f720-0563-448a-af5b-bdacf6748482-config-volume\") pod \"coredns-76f75df574-g8prk\" (UID: \"4ef4f720-0563-448a-af5b-bdacf6748482\") " pod="kube-system/coredns-76f75df574-g8prk" Aug 5 22:21:41.650561 kubelet[2541]: I0805 22:21:41.650511 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqw9\" (UniqueName: \"kubernetes.io/projected/604142d0-8114-43a0-91aa-43bd2b50a943-kube-api-access-fjqw9\") pod \"coredns-76f75df574-xcv7z\" (UID: \"604142d0-8114-43a0-91aa-43bd2b50a943\") " pod="kube-system/coredns-76f75df574-xcv7z" Aug 5 22:21:41.650819 kubelet[2541]: I0805 22:21:41.650791 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604142d0-8114-43a0-91aa-43bd2b50a943-config-volume\") pod \"coredns-76f75df574-xcv7z\" (UID: \"604142d0-8114-43a0-91aa-43bd2b50a943\") " pod="kube-system/coredns-76f75df574-xcv7z" Aug 5 22:21:41.650963 kubelet[2541]: I0805 22:21:41.650929 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10b22de4-c232-47e0-91fb-22ac20262794-tigera-ca-bundle\") pod \"calico-kube-controllers-6d9b85cbd7-fgmwx\" (UID: \"10b22de4-c232-47e0-91fb-22ac20262794\") " pod="calico-system/calico-kube-controllers-6d9b85cbd7-fgmwx" Aug 5 22:21:41.651017 kubelet[2541]: I0805 22:21:41.650999 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmrz\" (UniqueName: \"kubernetes.io/projected/10b22de4-c232-47e0-91fb-22ac20262794-kube-api-access-vsmrz\") pod \"calico-kube-controllers-6d9b85cbd7-fgmwx\" (UID: 
\"10b22de4-c232-47e0-91fb-22ac20262794\") " pod="calico-system/calico-kube-controllers-6d9b85cbd7-fgmwx" Aug 5 22:21:41.651080 kubelet[2541]: I0805 22:21:41.651064 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9m7\" (UniqueName: \"kubernetes.io/projected/4ef4f720-0563-448a-af5b-bdacf6748482-kube-api-access-jm9m7\") pod \"coredns-76f75df574-g8prk\" (UID: \"4ef4f720-0563-448a-af5b-bdacf6748482\") " pod="kube-system/coredns-76f75df574-g8prk" Aug 5 22:21:41.943178 kubelet[2541]: E0805 22:21:41.943141 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:41.943178 kubelet[2541]: E0805 22:21:41.943177 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:41.943967 containerd[1451]: time="2024-08-05T22:21:41.943926360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9b85cbd7-fgmwx,Uid:10b22de4-c232-47e0-91fb-22ac20262794,Namespace:calico-system,Attempt:0,}" Aug 5 22:21:41.944194 containerd[1451]: time="2024-08-05T22:21:41.943927453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xcv7z,Uid:604142d0-8114-43a0-91aa-43bd2b50a943,Namespace:kube-system,Attempt:0,}" Aug 5 22:21:41.944299 containerd[1451]: time="2024-08-05T22:21:41.943926461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g8prk,Uid:4ef4f720-0563-448a-af5b-bdacf6748482,Namespace:kube-system,Attempt:0,}" Aug 5 22:21:42.133836 containerd[1451]: time="2024-08-05T22:21:42.133764432Z" level=info msg="shim disconnected" id=4e988a9ac84654a0bba571976fcefa93b2901fe19aa27484ee86fc8c03897359 namespace=k8s.io Aug 5 22:21:42.133836 containerd[1451]: time="2024-08-05T22:21:42.133824432Z" 
level=warning msg="cleaning up after shim disconnected" id=4e988a9ac84654a0bba571976fcefa93b2901fe19aa27484ee86fc8c03897359 namespace=k8s.io Aug 5 22:21:42.133836 containerd[1451]: time="2024-08-05T22:21:42.133833400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:21:42.233005 containerd[1451]: time="2024-08-05T22:21:42.232889205Z" level=error msg="Failed to destroy network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.233124 containerd[1451]: time="2024-08-05T22:21:42.233062821Z" level=error msg="Failed to destroy network for sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.233433 containerd[1451]: time="2024-08-05T22:21:42.233387899Z" level=error msg="encountered an error cleaning up failed sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.233627 containerd[1451]: time="2024-08-05T22:21:42.233469752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9b85cbd7-fgmwx,Uid:10b22de4-c232-47e0-91fb-22ac20262794,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.233627 containerd[1451]: time="2024-08-05T22:21:42.233419583Z" level=error msg="Failed to destroy network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.233804 kubelet[2541]: E0805 22:21:42.233770 2541 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.233873 kubelet[2541]: E0805 22:21:42.233854 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d9b85cbd7-fgmwx" Aug 5 22:21:42.234348 kubelet[2541]: E0805 22:21:42.233889 2541 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d9b85cbd7-fgmwx" Aug 5 22:21:42.234348 kubelet[2541]: E0805 22:21:42.233953 2541 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d9b85cbd7-fgmwx_calico-system(10b22de4-c232-47e0-91fb-22ac20262794)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d9b85cbd7-fgmwx_calico-system(10b22de4-c232-47e0-91fb-22ac20262794)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9b85cbd7-fgmwx" podUID="10b22de4-c232-47e0-91fb-22ac20262794" Aug 5 22:21:42.234348 kubelet[2541]: E0805 22:21:42.234222 2541 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.234458 containerd[1451]: time="2024-08-05T22:21:42.234007444Z" level=error msg="encountered an error cleaning up failed sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.234458 containerd[1451]: time="2024-08-05T22:21:42.234049828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g8prk,Uid:4ef4f720-0563-448a-af5b-bdacf6748482,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.234458 containerd[1451]: time="2024-08-05T22:21:42.234149076Z" level=error msg="encountered an error cleaning up failed sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.234458 containerd[1451]: time="2024-08-05T22:21:42.234192853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xcv7z,Uid:604142d0-8114-43a0-91aa-43bd2b50a943,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.234558 kubelet[2541]: E0805 22:21:42.234250 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g8prk" Aug 5 22:21:42.234558 kubelet[2541]: E0805 22:21:42.234296 2541 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-g8prk" Aug 5 22:21:42.234558 kubelet[2541]: E0805 22:21:42.234337 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g8prk_kube-system(4ef4f720-0563-448a-af5b-bdacf6748482)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-g8prk_kube-system(4ef4f720-0563-448a-af5b-bdacf6748482)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g8prk" podUID="4ef4f720-0563-448a-af5b-bdacf6748482" Aug 5 22:21:42.234650 kubelet[2541]: E0805 22:21:42.234387 2541 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.234650 kubelet[2541]: E0805 22:21:42.234423 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xcv7z" Aug 5 22:21:42.234650 kubelet[2541]: E0805 
22:21:42.234443 2541 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-xcv7z" Aug 5 22:21:42.234720 kubelet[2541]: E0805 22:21:42.234491 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xcv7z_kube-system(604142d0-8114-43a0-91aa-43bd2b50a943)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xcv7z_kube-system(604142d0-8114-43a0-91aa-43bd2b50a943)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xcv7z" podUID="604142d0-8114-43a0-91aa-43bd2b50a943" Aug 5 22:21:42.539103 kubelet[2541]: E0805 22:21:42.538975 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:42.539727 containerd[1451]: time="2024-08-05T22:21:42.539656344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:21:42.540644 kubelet[2541]: I0805 22:21:42.540145 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:21:42.541334 containerd[1451]: time="2024-08-05T22:21:42.540979009Z" level=info msg="StopPodSandbox for 
\"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\"" Aug 5 22:21:42.541334 containerd[1451]: time="2024-08-05T22:21:42.541289529Z" level=info msg="Ensure that sandbox f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28 in task-service has been cleanup successfully" Aug 5 22:21:42.543232 kubelet[2541]: I0805 22:21:42.542555 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:21:42.543600 kubelet[2541]: I0805 22:21:42.543550 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:21:42.544395 containerd[1451]: time="2024-08-05T22:21:42.544356601Z" level=info msg="StopPodSandbox for \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\"" Aug 5 22:21:42.544616 containerd[1451]: time="2024-08-05T22:21:42.544587120Z" level=info msg="Ensure that sandbox d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391 in task-service has been cleanup successfully" Aug 5 22:21:42.546080 containerd[1451]: time="2024-08-05T22:21:42.546054754Z" level=info msg="StopPodSandbox for \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\"" Aug 5 22:21:42.546503 containerd[1451]: time="2024-08-05T22:21:42.546467327Z" level=info msg="Ensure that sandbox 0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd in task-service has been cleanup successfully" Aug 5 22:21:42.589075 containerd[1451]: time="2024-08-05T22:21:42.589026592Z" level=error msg="StopPodSandbox for \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\" failed" error="failed to destroy network for sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Aug 5 22:21:42.589407 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd-shm.mount: Deactivated successfully. Aug 5 22:21:42.590358 kubelet[2541]: E0805 22:21:42.590246 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:21:42.590984 kubelet[2541]: E0805 22:21:42.590853 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28"} Aug 5 22:21:42.590984 kubelet[2541]: E0805 22:21:42.590917 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ef4f720-0563-448a-af5b-bdacf6748482\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:21:42.590984 kubelet[2541]: E0805 22:21:42.590957 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ef4f720-0563-448a-af5b-bdacf6748482\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-g8prk" podUID="4ef4f720-0563-448a-af5b-bdacf6748482" Aug 5 22:21:42.591177 containerd[1451]: time="2024-08-05T22:21:42.590920906Z" level=error msg="StopPodSandbox for \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\" failed" error="failed to destroy network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.591218 kubelet[2541]: E0805 22:21:42.591145 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:21:42.591218 kubelet[2541]: E0805 22:21:42.591189 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391"} Aug 5 22:21:42.591318 kubelet[2541]: E0805 22:21:42.591225 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"604142d0-8114-43a0-91aa-43bd2b50a943\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 
5 22:21:42.591318 kubelet[2541]: E0805 22:21:42.591256 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"604142d0-8114-43a0-91aa-43bd2b50a943\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-xcv7z" podUID="604142d0-8114-43a0-91aa-43bd2b50a943" Aug 5 22:21:42.593855 containerd[1451]: time="2024-08-05T22:21:42.593802360Z" level=error msg="StopPodSandbox for \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\" failed" error="failed to destroy network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:42.594029 kubelet[2541]: E0805 22:21:42.594001 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:21:42.594029 kubelet[2541]: E0805 22:21:42.594027 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd"} Aug 5 22:21:42.594098 kubelet[2541]: E0805 22:21:42.594055 2541 kuberuntime_manager.go:1081] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10b22de4-c232-47e0-91fb-22ac20262794\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:21:42.594098 kubelet[2541]: E0805 22:21:42.594077 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10b22de4-c232-47e0-91fb-22ac20262794\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9b85cbd7-fgmwx" podUID="10b22de4-c232-47e0-91fb-22ac20262794" Aug 5 22:21:43.471301 systemd[1]: Created slice kubepods-besteffort-pod688c5c2d_a5bb_4def_9fd3_0971268d2169.slice - libcontainer container kubepods-besteffort-pod688c5c2d_a5bb_4def_9fd3_0971268d2169.slice. 
Aug 5 22:21:43.473781 containerd[1451]: time="2024-08-05T22:21:43.473727920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gh8dm,Uid:688c5c2d-a5bb-4def-9fd3-0971268d2169,Namespace:calico-system,Attempt:0,}" Aug 5 22:21:43.540534 containerd[1451]: time="2024-08-05T22:21:43.540465029Z" level=error msg="Failed to destroy network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:43.543945 containerd[1451]: time="2024-08-05T22:21:43.543861075Z" level=error msg="encountered an error cleaning up failed sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:43.543945 containerd[1451]: time="2024-08-05T22:21:43.543931435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gh8dm,Uid:688c5c2d-a5bb-4def-9fd3-0971268d2169,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:43.543937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b-shm.mount: Deactivated successfully. 
Aug 5 22:21:43.544374 kubelet[2541]: E0805 22:21:43.544352 2541 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:43.544462 kubelet[2541]: E0805 22:21:43.544413 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gh8dm" Aug 5 22:21:43.544462 kubelet[2541]: E0805 22:21:43.544442 2541 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gh8dm" Aug 5 22:21:43.544848 kubelet[2541]: E0805 22:21:43.544827 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gh8dm_calico-system(688c5c2d-a5bb-4def-9fd3-0971268d2169)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gh8dm_calico-system(688c5c2d-a5bb-4def-9fd3-0971268d2169)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gh8dm" podUID="688c5c2d-a5bb-4def-9fd3-0971268d2169" Aug 5 22:21:43.546119 kubelet[2541]: I0805 22:21:43.546085 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:21:43.547062 containerd[1451]: time="2024-08-05T22:21:43.547025509Z" level=info msg="StopPodSandbox for \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\"" Aug 5 22:21:43.547542 containerd[1451]: time="2024-08-05T22:21:43.547501837Z" level=info msg="Ensure that sandbox 1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b in task-service has been cleanup successfully" Aug 5 22:21:43.578401 containerd[1451]: time="2024-08-05T22:21:43.578107310Z" level=error msg="StopPodSandbox for \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\" failed" error="failed to destroy network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:21:43.578541 kubelet[2541]: E0805 22:21:43.578343 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:21:43.578541 kubelet[2541]: E0805 22:21:43.578385 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b"} Aug 5 22:21:43.578541 kubelet[2541]: E0805 22:21:43.578419 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"688c5c2d-a5bb-4def-9fd3-0971268d2169\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:21:43.578541 kubelet[2541]: E0805 22:21:43.578449 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"688c5c2d-a5bb-4def-9fd3-0971268d2169\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gh8dm" podUID="688c5c2d-a5bb-4def-9fd3-0971268d2169" Aug 5 22:21:45.103986 systemd[1]: Started sshd@7-10.0.0.155:22-10.0.0.1:48630.service - OpenSSH per-connection server daemon (10.0.0.1:48630). Aug 5 22:21:45.152740 sshd[3566]: Accepted publickey for core from 10.0.0.1 port 48630 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:21:45.154239 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:21:45.159166 systemd-logind[1436]: New session 8 of user core. Aug 5 22:21:45.165397 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 5 22:21:45.288464 sshd[3566]: pam_unix(sshd:session): session closed for user core Aug 5 22:21:45.291191 systemd[1]: sshd@7-10.0.0.155:22-10.0.0.1:48630.service: Deactivated successfully. Aug 5 22:21:45.293195 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:21:45.295778 systemd-logind[1436]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:21:45.297012 systemd-logind[1436]: Removed session 8. Aug 5 22:21:46.605387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707822193.mount: Deactivated successfully. Aug 5 22:21:46.924591 containerd[1451]: time="2024-08-05T22:21:46.924433096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:46.925423 containerd[1451]: time="2024-08-05T22:21:46.925384640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 5 22:21:46.926605 containerd[1451]: time="2024-08-05T22:21:46.926565858Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:46.951775 containerd[1451]: time="2024-08-05T22:21:46.951724808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:46.952223 containerd[1451]: time="2024-08-05T22:21:46.952199477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 4.412501361s" Aug 5 22:21:46.952294 containerd[1451]: 
time="2024-08-05T22:21:46.952228234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 5 22:21:46.961335 containerd[1451]: time="2024-08-05T22:21:46.961173404Z" level=info msg="CreateContainer within sandbox \"935eb03d4473ed58cec2b53622fe56541479cec29e02455720d2bed0805be8c6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:21:46.980473 containerd[1451]: time="2024-08-05T22:21:46.980425449Z" level=info msg="CreateContainer within sandbox \"935eb03d4473ed58cec2b53622fe56541479cec29e02455720d2bed0805be8c6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3c2dc87a6b18d30f9c660a0a351c0ef80bcd95f805e04466f21a5dc57ea31d09\"" Aug 5 22:21:46.980886 containerd[1451]: time="2024-08-05T22:21:46.980865069Z" level=info msg="StartContainer for \"3c2dc87a6b18d30f9c660a0a351c0ef80bcd95f805e04466f21a5dc57ea31d09\"" Aug 5 22:21:47.046446 systemd[1]: Started cri-containerd-3c2dc87a6b18d30f9c660a0a351c0ef80bcd95f805e04466f21a5dc57ea31d09.scope - libcontainer container 3c2dc87a6b18d30f9c660a0a351c0ef80bcd95f805e04466f21a5dc57ea31d09. Aug 5 22:21:47.150750 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:21:47.150932 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 5 22:21:47.460462 containerd[1451]: time="2024-08-05T22:21:47.460402406Z" level=info msg="StartContainer for \"3c2dc87a6b18d30f9c660a0a351c0ef80bcd95f805e04466f21a5dc57ea31d09\" returns successfully" Aug 5 22:21:47.557490 kubelet[2541]: E0805 22:21:47.557462 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:47.567504 kubelet[2541]: I0805 22:21:47.567460 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-dkxlp" podStartSLOduration=1.6087266740000001 podStartE2EDuration="16.567399432s" podCreationTimestamp="2024-08-05 22:21:31 +0000 UTC" firstStartedPulling="2024-08-05 22:21:31.993799169 +0000 UTC m=+18.619924744" lastFinishedPulling="2024-08-05 22:21:46.952471927 +0000 UTC m=+33.578597502" observedRunningTime="2024-08-05 22:21:47.56712529 +0000 UTC m=+34.193250865" watchObservedRunningTime="2024-08-05 22:21:47.567399432 +0000 UTC m=+34.193525007" Aug 5 22:21:50.299980 systemd[1]: Started sshd@8-10.0.0.155:22-10.0.0.1:48638.service - OpenSSH per-connection server daemon (10.0.0.1:48638). Aug 5 22:21:50.339182 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 48638 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:21:50.340702 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:21:50.344495 systemd-logind[1436]: New session 9 of user core. Aug 5 22:21:50.358402 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:21:50.470116 sshd[3769]: pam_unix(sshd:session): session closed for user core Aug 5 22:21:50.473770 systemd[1]: sshd@8-10.0.0.155:22-10.0.0.1:48638.service: Deactivated successfully. Aug 5 22:21:50.475707 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:21:50.476331 systemd-logind[1436]: Session 9 logged out. Waiting for processes to exit. 
Aug 5 22:21:50.477250 systemd-logind[1436]: Removed session 9. Aug 5 22:21:54.465871 containerd[1451]: time="2024-08-05T22:21:54.465806408Z" level=info msg="StopPodSandbox for \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\"" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.572 [INFO][3899] k8s.go 608: Cleaning up netns ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.572 [INFO][3899] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" iface="eth0" netns="/var/run/netns/cni-97eb69bd-a6df-411c-cb67-5df42051bb84" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.573 [INFO][3899] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" iface="eth0" netns="/var/run/netns/cni-97eb69bd-a6df-411c-cb67-5df42051bb84" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.573 [INFO][3899] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" iface="eth0" netns="/var/run/netns/cni-97eb69bd-a6df-411c-cb67-5df42051bb84" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.573 [INFO][3899] k8s.go 615: Releasing IP address(es) ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.573 [INFO][3899] utils.go 188: Calico CNI releasing IP address ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.644 [INFO][3907] ipam_plugin.go 411: Releasing address using handleID ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" HandleID="k8s-pod-network.d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.644 [INFO][3907] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.644 [INFO][3907] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.652 [WARNING][3907] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" HandleID="k8s-pod-network.d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.652 [INFO][3907] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" HandleID="k8s-pod-network.d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.655 [INFO][3907] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:21:54.661660 containerd[1451]: 2024-08-05 22:21:54.658 [INFO][3899] k8s.go 621: Teardown processing complete. ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:21:54.663062 containerd[1451]: time="2024-08-05T22:21:54.663010866Z" level=info msg="TearDown network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\" successfully" Aug 5 22:21:54.663062 containerd[1451]: time="2024-08-05T22:21:54.663050904Z" level=info msg="StopPodSandbox for \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\" returns successfully" Aug 5 22:21:54.664125 kubelet[2541]: E0805 22:21:54.663688 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:54.665362 containerd[1451]: time="2024-08-05T22:21:54.664844102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xcv7z,Uid:604142d0-8114-43a0-91aa-43bd2b50a943,Namespace:kube-system,Attempt:1,}" Aug 5 22:21:54.666115 systemd[1]: run-netns-cni\x2d97eb69bd\x2da6df\x2d411c\x2dcb67\x2d5df42051bb84.mount: Deactivated successfully. 
Aug 5 22:21:54.813779 systemd-networkd[1392]: calia1e90eb924b: Link UP Aug 5 22:21:54.814634 systemd-networkd[1392]: calia1e90eb924b: Gained carrier Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.719 [INFO][3915] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.731 [INFO][3915] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--xcv7z-eth0 coredns-76f75df574- kube-system 604142d0-8114-43a0-91aa-43bd2b50a943 812 0 2024-08-05 22:21:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-xcv7z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia1e90eb924b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Namespace="kube-system" Pod="coredns-76f75df574-xcv7z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xcv7z-" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.731 [INFO][3915] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Namespace="kube-system" Pod="coredns-76f75df574-xcv7z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.765 [INFO][3929] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" HandleID="k8s-pod-network.c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.774 [INFO][3929] ipam_plugin.go 264: Auto assigning IP 
ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" HandleID="k8s-pod-network.c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002952c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-xcv7z", "timestamp":"2024-08-05 22:21:54.765350526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.774 [INFO][3929] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.774 [INFO][3929] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.774 [INFO][3929] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.776 [INFO][3929] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" host="localhost" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.781 [INFO][3929] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.784 [INFO][3929] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.786 [INFO][3929] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.788 [INFO][3929] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:21:54.831354 containerd[1451]: 
2024-08-05 22:21:54.788 [INFO][3929] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" host="localhost" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.789 [INFO][3929] ipam.go 1685: Creating new handle: k8s-pod-network.c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881 Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.795 [INFO][3929] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" host="localhost" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.800 [INFO][3929] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" host="localhost" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.800 [INFO][3929] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" host="localhost" Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.800 [INFO][3929] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:21:54.831354 containerd[1451]: 2024-08-05 22:21:54.800 [INFO][3929] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" HandleID="k8s-pod-network.c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.832070 containerd[1451]: 2024-08-05 22:21:54.804 [INFO][3915] k8s.go 386: Populated endpoint ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Namespace="kube-system" Pod="coredns-76f75df574-xcv7z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xcv7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xcv7z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"604142d0-8114-43a0-91aa-43bd2b50a943", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-xcv7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1e90eb924b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:21:54.832070 containerd[1451]: 2024-08-05 22:21:54.805 [INFO][3915] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Namespace="kube-system" Pod="coredns-76f75df574-xcv7z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.832070 containerd[1451]: 2024-08-05 22:21:54.805 [INFO][3915] dataplane_linux.go 68: Setting the host side veth name to calia1e90eb924b ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Namespace="kube-system" Pod="coredns-76f75df574-xcv7z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.832070 containerd[1451]: 2024-08-05 22:21:54.814 [INFO][3915] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Namespace="kube-system" Pod="coredns-76f75df574-xcv7z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.832070 containerd[1451]: 2024-08-05 22:21:54.814 [INFO][3915] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Namespace="kube-system" Pod="coredns-76f75df574-xcv7z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xcv7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xcv7z-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"604142d0-8114-43a0-91aa-43bd2b50a943", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881", Pod:"coredns-76f75df574-xcv7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1e90eb924b", MAC:"4e:f6:52:08:ad:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:21:54.832070 containerd[1451]: 2024-08-05 22:21:54.825 [INFO][3915] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881" Namespace="kube-system" Pod="coredns-76f75df574-xcv7z" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:21:54.861731 containerd[1451]: 
time="2024-08-05T22:21:54.861580941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:54.861731 containerd[1451]: time="2024-08-05T22:21:54.861660317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:54.861731 containerd[1451]: time="2024-08-05T22:21:54.861697640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:54.861947 containerd[1451]: time="2024-08-05T22:21:54.861717369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:54.890468 systemd[1]: Started cri-containerd-c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881.scope - libcontainer container c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881. 
Aug 5 22:21:54.906590 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:21:54.934396 containerd[1451]: time="2024-08-05T22:21:54.934341039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xcv7z,Uid:604142d0-8114-43a0-91aa-43bd2b50a943,Namespace:kube-system,Attempt:1,} returns sandbox id \"c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881\"" Aug 5 22:21:54.935474 kubelet[2541]: E0805 22:21:54.935448 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:54.937560 containerd[1451]: time="2024-08-05T22:21:54.937515076Z" level=info msg="CreateContainer within sandbox \"c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:21:54.968215 containerd[1451]: time="2024-08-05T22:21:54.968146322Z" level=info msg="CreateContainer within sandbox \"c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2dbfd478392531767ee770baeb3dab2873ace0c30c22e3c78d75d64b12a04d06\"" Aug 5 22:21:54.968822 containerd[1451]: time="2024-08-05T22:21:54.968787921Z" level=info msg="StartContainer for \"2dbfd478392531767ee770baeb3dab2873ace0c30c22e3c78d75d64b12a04d06\"" Aug 5 22:21:55.001567 systemd[1]: Started cri-containerd-2dbfd478392531767ee770baeb3dab2873ace0c30c22e3c78d75d64b12a04d06.scope - libcontainer container 2dbfd478392531767ee770baeb3dab2873ace0c30c22e3c78d75d64b12a04d06. 
Aug 5 22:21:55.042362 containerd[1451]: time="2024-08-05T22:21:55.042295127Z" level=info msg="StartContainer for \"2dbfd478392531767ee770baeb3dab2873ace0c30c22e3c78d75d64b12a04d06\" returns successfully" Aug 5 22:21:55.465949 containerd[1451]: time="2024-08-05T22:21:55.465891681Z" level=info msg="StopPodSandbox for \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\"" Aug 5 22:21:55.482026 systemd[1]: Started sshd@9-10.0.0.155:22-10.0.0.1:57856.service - OpenSSH per-connection server daemon (10.0.0.1:57856). Aug 5 22:21:55.525637 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 57856 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:21:55.527787 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:21:55.531965 systemd-logind[1436]: New session 10 of user core. Aug 5 22:21:55.541487 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:21:55.570757 kubelet[2541]: E0805 22:21:55.570687 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:55.650932 kubelet[2541]: I0805 22:21:55.650420 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xcv7z" podStartSLOduration=29.65036721 podStartE2EDuration="29.65036721s" podCreationTimestamp="2024-08-05 22:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:21:55.648075959 +0000 UTC m=+42.274201534" watchObservedRunningTime="2024-08-05 22:21:55.65036721 +0000 UTC m=+42.276492785" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.515 [INFO][4073] k8s.go 608: Cleaning up netns ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.515 
[INFO][4073] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" iface="eth0" netns="/var/run/netns/cni-682dbe57-afd8-ec8c-4578-942305c9ca59" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.515 [INFO][4073] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" iface="eth0" netns="/var/run/netns/cni-682dbe57-afd8-ec8c-4578-942305c9ca59" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.515 [INFO][4073] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" iface="eth0" netns="/var/run/netns/cni-682dbe57-afd8-ec8c-4578-942305c9ca59" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.516 [INFO][4073] k8s.go 615: Releasing IP address(es) ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.516 [INFO][4073] utils.go 188: Calico CNI releasing IP address ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.538 [INFO][4083] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" HandleID="k8s-pod-network.0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.538 [INFO][4083] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.538 [INFO][4083] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.642 [WARNING][4083] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" HandleID="k8s-pod-network.0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.642 [INFO][4083] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" HandleID="k8s-pod-network.0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.645 [INFO][4083] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:21:55.654505 containerd[1451]: 2024-08-05 22:21:55.649 [INFO][4073] k8s.go 621: Teardown processing complete. ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:21:55.654505 containerd[1451]: time="2024-08-05T22:21:55.654374555Z" level=info msg="TearDown network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\" successfully" Aug 5 22:21:55.654505 containerd[1451]: time="2024-08-05T22:21:55.654411959Z" level=info msg="StopPodSandbox for \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\" returns successfully" Aug 5 22:21:55.656465 containerd[1451]: time="2024-08-05T22:21:55.656322463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9b85cbd7-fgmwx,Uid:10b22de4-c232-47e0-91fb-22ac20262794,Namespace:calico-system,Attempt:1,}" Aug 5 22:21:55.672640 systemd[1]: run-netns-cni\x2d682dbe57\x2dafd8\x2dec8c\x2d4578\x2d942305c9ca59.mount: Deactivated successfully. 
Aug 5 22:21:55.674549 sshd[4078]: pam_unix(sshd:session): session closed for user core Aug 5 22:21:55.683262 systemd[1]: sshd@9-10.0.0.155:22-10.0.0.1:57856.service: Deactivated successfully. Aug 5 22:21:55.686804 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:21:55.689002 systemd-logind[1436]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:21:55.690698 systemd-logind[1436]: Removed session 10. Aug 5 22:21:55.771017 systemd-networkd[1392]: cali96a82cad8ce: Link UP Aug 5 22:21:55.771910 systemd-networkd[1392]: cali96a82cad8ce: Gained carrier Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.704 [INFO][4103] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.712 [INFO][4103] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0 calico-kube-controllers-6d9b85cbd7- calico-system 10b22de4-c232-47e0-91fb-22ac20262794 828 0 2024-08-05 22:21:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d9b85cbd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d9b85cbd7-fgmwx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali96a82cad8ce [] []}} ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Namespace="calico-system" Pod="calico-kube-controllers-6d9b85cbd7-fgmwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.712 [INFO][4103] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Namespace="calico-system" 
Pod="calico-kube-controllers-6d9b85cbd7-fgmwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.741 [INFO][4121] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" HandleID="k8s-pod-network.ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.748 [INFO][4121] ipam_plugin.go 264: Auto assigning IP ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" HandleID="k8s-pod-network.ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000322210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d9b85cbd7-fgmwx", "timestamp":"2024-08-05 22:21:55.741402943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.748 [INFO][4121] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.748 [INFO][4121] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.748 [INFO][4121] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.750 [INFO][4121] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" host="localhost" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.753 [INFO][4121] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.756 [INFO][4121] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.757 [INFO][4121] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.759 [INFO][4121] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.759 [INFO][4121] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" host="localhost" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.760 [INFO][4121] ipam.go 1685: Creating new handle: k8s-pod-network.ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528 Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.762 [INFO][4121] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" host="localhost" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.766 [INFO][4121] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" host="localhost" Aug 5 
22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.766 [INFO][4121] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" host="localhost" Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.766 [INFO][4121] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:21:55.783592 containerd[1451]: 2024-08-05 22:21:55.766 [INFO][4121] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" HandleID="k8s-pod-network.ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.784451 containerd[1451]: 2024-08-05 22:21:55.769 [INFO][4103] k8s.go 386: Populated endpoint ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Namespace="calico-system" Pod="calico-kube-controllers-6d9b85cbd7-fgmwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0", GenerateName:"calico-kube-controllers-6d9b85cbd7-", Namespace:"calico-system", SelfLink:"", UID:"10b22de4-c232-47e0-91fb-22ac20262794", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9b85cbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d9b85cbd7-fgmwx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96a82cad8ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:21:55.784451 containerd[1451]: 2024-08-05 22:21:55.769 [INFO][4103] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Namespace="calico-system" Pod="calico-kube-controllers-6d9b85cbd7-fgmwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.784451 containerd[1451]: 2024-08-05 22:21:55.769 [INFO][4103] dataplane_linux.go 68: Setting the host side veth name to cali96a82cad8ce ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Namespace="calico-system" Pod="calico-kube-controllers-6d9b85cbd7-fgmwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.784451 containerd[1451]: 2024-08-05 22:21:55.771 [INFO][4103] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Namespace="calico-system" Pod="calico-kube-controllers-6d9b85cbd7-fgmwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.784451 containerd[1451]: 2024-08-05 22:21:55.771 [INFO][4103] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Namespace="calico-system" 
Pod="calico-kube-controllers-6d9b85cbd7-fgmwx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0", GenerateName:"calico-kube-controllers-6d9b85cbd7-", Namespace:"calico-system", SelfLink:"", UID:"10b22de4-c232-47e0-91fb-22ac20262794", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9b85cbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528", Pod:"calico-kube-controllers-6d9b85cbd7-fgmwx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96a82cad8ce", MAC:"f2:05:60:2e:c9:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:21:55.784451 containerd[1451]: 2024-08-05 22:21:55.779 [INFO][4103] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528" Namespace="calico-system" Pod="calico-kube-controllers-6d9b85cbd7-fgmwx" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:21:55.804375 containerd[1451]: time="2024-08-05T22:21:55.804254808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:55.804375 containerd[1451]: time="2024-08-05T22:21:55.804330908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:55.804375 containerd[1451]: time="2024-08-05T22:21:55.804347500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:55.804603 containerd[1451]: time="2024-08-05T22:21:55.804374803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:55.833583 systemd[1]: Started cri-containerd-ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528.scope - libcontainer container ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528. 
Aug 5 22:21:55.855383 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:21:55.885067 containerd[1451]: time="2024-08-05T22:21:55.885025128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9b85cbd7-fgmwx,Uid:10b22de4-c232-47e0-91fb-22ac20262794,Namespace:calico-system,Attempt:1,} returns sandbox id \"ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528\"" Aug 5 22:21:55.886585 containerd[1451]: time="2024-08-05T22:21:55.886389863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:21:56.574007 kubelet[2541]: E0805 22:21:56.573974 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:56.812429 systemd-networkd[1392]: calia1e90eb924b: Gained IPv6LL Aug 5 22:21:57.383216 kubelet[2541]: I0805 22:21:57.383145 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:21:57.384082 kubelet[2541]: E0805 22:21:57.384051 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:57.465801 containerd[1451]: time="2024-08-05T22:21:57.465733334Z" level=info msg="StopPodSandbox for \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\"" Aug 5 22:21:57.466803 containerd[1451]: time="2024-08-05T22:21:57.465771819Z" level=info msg="StopPodSandbox for \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\"" Aug 5 22:21:57.575341 kubelet[2541]: E0805 22:21:57.575250 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.538 
[INFO][4265] k8s.go 608: Cleaning up netns ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.538 [INFO][4265] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" iface="eth0" netns="/var/run/netns/cni-55ee55bd-6e46-e4d0-6d3f-3c80f3281e90" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.538 [INFO][4265] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" iface="eth0" netns="/var/run/netns/cni-55ee55bd-6e46-e4d0-6d3f-3c80f3281e90" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.541 [INFO][4265] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" iface="eth0" netns="/var/run/netns/cni-55ee55bd-6e46-e4d0-6d3f-3c80f3281e90" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.541 [INFO][4265] k8s.go 615: Releasing IP address(es) ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.541 [INFO][4265] utils.go 188: Calico CNI releasing IP address ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.582 [INFO][4288] ipam_plugin.go 411: Releasing address using handleID ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" HandleID="k8s-pod-network.f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.582 [INFO][4288] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.582 [INFO][4288] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.592 [WARNING][4288] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" HandleID="k8s-pod-network.f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.592 [INFO][4288] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" HandleID="k8s-pod-network.f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.594 [INFO][4288] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:21:57.601403 containerd[1451]: 2024-08-05 22:21:57.599 [INFO][4265] k8s.go 621: Teardown processing complete. 
ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:21:57.604556 containerd[1451]: time="2024-08-05T22:21:57.604240967Z" level=info msg="TearDown network for sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\" successfully" Aug 5 22:21:57.604556 containerd[1451]: time="2024-08-05T22:21:57.604306555Z" level=info msg="StopPodSandbox for \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\" returns successfully" Aug 5 22:21:57.604762 kubelet[2541]: E0805 22:21:57.604732 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:57.606720 containerd[1451]: time="2024-08-05T22:21:57.606694920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g8prk,Uid:4ef4f720-0563-448a-af5b-bdacf6748482,Namespace:kube-system,Attempt:1,}" Aug 5 22:21:57.606857 systemd[1]: run-netns-cni\x2d55ee55bd\x2d6e46\x2de4d0\x2d6d3f\x2d3c80f3281e90.mount: Deactivated successfully. Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.554 [INFO][4264] k8s.go 608: Cleaning up netns ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.554 [INFO][4264] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" iface="eth0" netns="/var/run/netns/cni-b454a2a1-76a3-7e90-9901-238183134e63" Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.554 [INFO][4264] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" iface="eth0" netns="/var/run/netns/cni-b454a2a1-76a3-7e90-9901-238183134e63" Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.556 [INFO][4264] dataplane_linux.go 568: Workload's veth was already gone. 
Nothing to do. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" iface="eth0" netns="/var/run/netns/cni-b454a2a1-76a3-7e90-9901-238183134e63" Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.556 [INFO][4264] k8s.go 615: Releasing IP address(es) ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.556 [INFO][4264] utils.go 188: Calico CNI releasing IP address ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.601 [INFO][4307] ipam_plugin.go 411: Releasing address using handleID ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" HandleID="k8s-pod-network.1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.601 [INFO][4307] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.601 [INFO][4307] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.608 [WARNING][4307] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" HandleID="k8s-pod-network.1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.608 [INFO][4307] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" HandleID="k8s-pod-network.1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.610 [INFO][4307] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:21:57.616034 containerd[1451]: 2024-08-05 22:21:57.613 [INFO][4264] k8s.go 621: Teardown processing complete. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:21:57.618950 containerd[1451]: time="2024-08-05T22:21:57.618395521Z" level=info msg="TearDown network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\" successfully" Aug 5 22:21:57.618950 containerd[1451]: time="2024-08-05T22:21:57.618442022Z" level=info msg="StopPodSandbox for \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\" returns successfully" Aug 5 22:21:57.619616 containerd[1451]: time="2024-08-05T22:21:57.619573206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gh8dm,Uid:688c5c2d-a5bb-4def-9fd3-0971268d2169,Namespace:calico-system,Attempt:1,}" Aug 5 22:21:57.620922 systemd[1]: run-netns-cni\x2db454a2a1\x2d76a3\x2d7e90\x2d9901\x2d238183134e63.mount: Deactivated successfully. 
Aug 5 22:21:57.645566 systemd-networkd[1392]: cali96a82cad8ce: Gained IPv6LL Aug 5 22:21:57.776242 systemd-networkd[1392]: cali43a74f2bd9f: Link UP Aug 5 22:21:57.776880 systemd-networkd[1392]: cali43a74f2bd9f: Gained carrier Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.675 [INFO][4335] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.691 [INFO][4335] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gh8dm-eth0 csi-node-driver- calico-system 688c5c2d-a5bb-4def-9fd3-0971268d2169 860 0 2024-08-05 22:21:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-gh8dm eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali43a74f2bd9f [] []}} ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Namespace="calico-system" Pod="csi-node-driver-gh8dm" WorkloadEndpoint="localhost-k8s-csi--node--driver--gh8dm-" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.691 [INFO][4335] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Namespace="calico-system" Pod="csi-node-driver-gh8dm" WorkloadEndpoint="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.731 [INFO][4383] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" HandleID="k8s-pod-network.cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.799257 
containerd[1451]: 2024-08-05 22:21:57.742 [INFO][4383] ipam_plugin.go 264: Auto assigning IP ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" HandleID="k8s-pod-network.cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eac10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gh8dm", "timestamp":"2024-08-05 22:21:57.731121603 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.743 [INFO][4383] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.743 [INFO][4383] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.743 [INFO][4383] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.745 [INFO][4383] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" host="localhost" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.749 [INFO][4383] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.753 [INFO][4383] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.755 [INFO][4383] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.757 [INFO][4383] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.757 [INFO][4383] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" host="localhost" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.761 [INFO][4383] ipam.go 1685: Creating new handle: k8s-pod-network.cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.765 [INFO][4383] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" host="localhost" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.770 [INFO][4383] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" host="localhost" Aug 5 
22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.770 [INFO][4383] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" host="localhost" Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.770 [INFO][4383] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:21:57.799257 containerd[1451]: 2024-08-05 22:21:57.770 [INFO][4383] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" HandleID="k8s-pod-network.cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.800109 containerd[1451]: 2024-08-05 22:21:57.773 [INFO][4335] k8s.go 386: Populated endpoint ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Namespace="calico-system" Pod="csi-node-driver-gh8dm" WorkloadEndpoint="localhost-k8s-csi--node--driver--gh8dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gh8dm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"688c5c2d-a5bb-4def-9fd3-0971268d2169", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gh8dm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali43a74f2bd9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:21:57.800109 containerd[1451]: 2024-08-05 22:21:57.773 [INFO][4335] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Namespace="calico-system" Pod="csi-node-driver-gh8dm" WorkloadEndpoint="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.800109 containerd[1451]: 2024-08-05 22:21:57.773 [INFO][4335] dataplane_linux.go 68: Setting the host side veth name to cali43a74f2bd9f ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Namespace="calico-system" Pod="csi-node-driver-gh8dm" WorkloadEndpoint="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.800109 containerd[1451]: 2024-08-05 22:21:57.777 [INFO][4335] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Namespace="calico-system" Pod="csi-node-driver-gh8dm" WorkloadEndpoint="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.800109 containerd[1451]: 2024-08-05 22:21:57.777 [INFO][4335] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Namespace="calico-system" Pod="csi-node-driver-gh8dm" WorkloadEndpoint="localhost-k8s-csi--node--driver--gh8dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gh8dm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"688c5c2d-a5bb-4def-9fd3-0971268d2169", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b", Pod:"csi-node-driver-gh8dm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali43a74f2bd9f", MAC:"4a:ec:76:24:5f:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:21:57.800109 containerd[1451]: 2024-08-05 22:21:57.795 [INFO][4335] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b" Namespace="calico-system" Pod="csi-node-driver-gh8dm" WorkloadEndpoint="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:21:57.834143 containerd[1451]: time="2024-08-05T22:21:57.833822484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:57.834143 containerd[1451]: time="2024-08-05T22:21:57.833885327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:57.834143 containerd[1451]: time="2024-08-05T22:21:57.833908583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:57.834143 containerd[1451]: time="2024-08-05T22:21:57.833926938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:57.857502 systemd[1]: Started cri-containerd-cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b.scope - libcontainer container cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b. Aug 5 22:21:57.870321 systemd-networkd[1392]: cali92085c360bd: Link UP Aug 5 22:21:57.870945 systemd-networkd[1392]: cali92085c360bd: Gained carrier Aug 5 22:21:57.875131 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.664 [INFO][4325] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.683 [INFO][4325] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--g8prk-eth0 coredns-76f75df574- kube-system 4ef4f720-0563-448a-af5b-bdacf6748482 859 0 2024-08-05 22:21:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-g8prk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali92085c360bd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] 
[]}} ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Namespace="kube-system" Pod="coredns-76f75df574-g8prk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--g8prk-" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.683 [INFO][4325] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Namespace="kube-system" Pod="coredns-76f75df574-g8prk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.735 [INFO][4378] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" HandleID="k8s-pod-network.41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.745 [INFO][4378] ipam_plugin.go 264: Auto assigning IP ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" HandleID="k8s-pod-network.41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000438250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-g8prk", "timestamp":"2024-08-05 22:21:57.735908322 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.745 [INFO][4378] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.770 [INFO][4378] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.771 [INFO][4378] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.795 [INFO][4378] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" host="localhost" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.812 [INFO][4378] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.817 [INFO][4378] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.819 [INFO][4378] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.821 [INFO][4378] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.821 [INFO][4378] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" host="localhost" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.822 [INFO][4378] ipam.go 1685: Creating new handle: k8s-pod-network.41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44 Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.825 [INFO][4378] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" host="localhost" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.859 [INFO][4378] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" host="localhost" Aug 5 
22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.859 [INFO][4378] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" host="localhost" Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.859 [INFO][4378] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:21:57.887539 containerd[1451]: 2024-08-05 22:21:57.859 [INFO][4378] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" HandleID="k8s-pod-network.41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.888221 containerd[1451]: 2024-08-05 22:21:57.865 [INFO][4325] k8s.go 386: Populated endpoint ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Namespace="kube-system" Pod="coredns-76f75df574-g8prk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--g8prk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--g8prk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4ef4f720-0563-448a-af5b-bdacf6748482", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-g8prk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92085c360bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:21:57.888221 containerd[1451]: 2024-08-05 22:21:57.865 [INFO][4325] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Namespace="kube-system" Pod="coredns-76f75df574-g8prk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.888221 containerd[1451]: 2024-08-05 22:21:57.866 [INFO][4325] dataplane_linux.go 68: Setting the host side veth name to cali92085c360bd ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Namespace="kube-system" Pod="coredns-76f75df574-g8prk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.888221 containerd[1451]: 2024-08-05 22:21:57.871 [INFO][4325] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Namespace="kube-system" Pod="coredns-76f75df574-g8prk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.888221 containerd[1451]: 2024-08-05 22:21:57.871 [INFO][4325] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Namespace="kube-system" Pod="coredns-76f75df574-g8prk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--g8prk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--g8prk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4ef4f720-0563-448a-af5b-bdacf6748482", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44", Pod:"coredns-76f75df574-g8prk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92085c360bd", MAC:"da:46:a1:66:29:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:21:57.888221 containerd[1451]: 2024-08-05 22:21:57.883 [INFO][4325] k8s.go 500: Wrote updated endpoint to datastore ContainerID="41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44" Namespace="kube-system" Pod="coredns-76f75df574-g8prk" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:21:57.894622 containerd[1451]: time="2024-08-05T22:21:57.894578368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gh8dm,Uid:688c5c2d-a5bb-4def-9fd3-0971268d2169,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b\"" Aug 5 22:21:57.947433 containerd[1451]: time="2024-08-05T22:21:57.947005912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:21:57.947433 containerd[1451]: time="2024-08-05T22:21:57.947079135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:57.947433 containerd[1451]: time="2024-08-05T22:21:57.947107881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:21:57.947433 containerd[1451]: time="2024-08-05T22:21:57.947129584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:21:57.947659 kubelet[2541]: I0805 22:21:57.947600 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:21:57.949301 kubelet[2541]: E0805 22:21:57.949225 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:57.975569 systemd[1]: Started cri-containerd-41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44.scope - libcontainer container 41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44. Aug 5 22:21:57.994591 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:21:58.030396 containerd[1451]: time="2024-08-05T22:21:58.030348413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g8prk,Uid:4ef4f720-0563-448a-af5b-bdacf6748482,Namespace:kube-system,Attempt:1,} returns sandbox id \"41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44\"" Aug 5 22:21:58.031524 kubelet[2541]: E0805 22:21:58.031502 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:58.034704 containerd[1451]: time="2024-08-05T22:21:58.034605640Z" level=info msg="CreateContainer within sandbox \"41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:21:58.218830 containerd[1451]: time="2024-08-05T22:21:58.218674075Z" level=info msg="CreateContainer within sandbox \"41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05624ada8eeea6ec353df9c5ac6dbdb466af789ac821730c73d02ff9be83a3d2\"" Aug 5 22:21:58.220685 containerd[1451]: 
time="2024-08-05T22:21:58.219416568Z" level=info msg="StartContainer for \"05624ada8eeea6ec353df9c5ac6dbdb466af789ac821730c73d02ff9be83a3d2\"" Aug 5 22:21:58.238431 containerd[1451]: time="2024-08-05T22:21:58.238389506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:58.252203 containerd[1451]: time="2024-08-05T22:21:58.252026549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 5 22:21:58.252461 systemd[1]: Started cri-containerd-05624ada8eeea6ec353df9c5ac6dbdb466af789ac821730c73d02ff9be83a3d2.scope - libcontainer container 05624ada8eeea6ec353df9c5ac6dbdb466af789ac821730c73d02ff9be83a3d2. Aug 5 22:21:58.398146 containerd[1451]: time="2024-08-05T22:21:58.398072166Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:58.635565 containerd[1451]: time="2024-08-05T22:21:58.635502111Z" level=info msg="StartContainer for \"05624ada8eeea6ec353df9c5ac6dbdb466af789ac821730c73d02ff9be83a3d2\" returns successfully" Aug 5 22:21:58.638023 kubelet[2541]: E0805 22:21:58.638001 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:58.640390 kubelet[2541]: E0805 22:21:58.640365 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:58.692977 kubelet[2541]: I0805 22:21:58.692238 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-g8prk" podStartSLOduration=32.692205225 podStartE2EDuration="32.692205225s" podCreationTimestamp="2024-08-05 
22:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:21:58.691791827 +0000 UTC m=+45.317917402" watchObservedRunningTime="2024-08-05 22:21:58.692205225 +0000 UTC m=+45.318330790" Aug 5 22:21:58.770960 containerd[1451]: time="2024-08-05T22:21:58.770902337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:21:58.771842 containerd[1451]: time="2024-08-05T22:21:58.771800574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.885384739s" Aug 5 22:21:58.771917 containerd[1451]: time="2024-08-05T22:21:58.771836885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:21:58.772350 containerd[1451]: time="2024-08-05T22:21:58.772326744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:21:58.780077 containerd[1451]: time="2024-08-05T22:21:58.780046512Z" level=info msg="CreateContainer within sandbox \"ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:21:59.245823 systemd-networkd[1392]: cali92085c360bd: Gained IPv6LL Aug 5 22:21:59.371885 systemd-networkd[1392]: vxlan.calico: Link UP Aug 5 22:21:59.372038 systemd-networkd[1392]: vxlan.calico: Gained carrier Aug 5 22:21:59.515044 containerd[1451]: 
time="2024-08-05T22:21:59.514933608Z" level=info msg="CreateContainer within sandbox \"ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8e67408451bd94baf4bfe2bcca3d3f4385fe8e9f9fd86d842f9d4cd60c0d398f\"" Aug 5 22:21:59.515821 containerd[1451]: time="2024-08-05T22:21:59.515797957Z" level=info msg="StartContainer for \"8e67408451bd94baf4bfe2bcca3d3f4385fe8e9f9fd86d842f9d4cd60c0d398f\"" Aug 5 22:21:59.546542 systemd[1]: Started cri-containerd-8e67408451bd94baf4bfe2bcca3d3f4385fe8e9f9fd86d842f9d4cd60c0d398f.scope - libcontainer container 8e67408451bd94baf4bfe2bcca3d3f4385fe8e9f9fd86d842f9d4cd60c0d398f. Aug 5 22:21:59.628163 containerd[1451]: time="2024-08-05T22:21:59.628111265Z" level=info msg="StartContainer for \"8e67408451bd94baf4bfe2bcca3d3f4385fe8e9f9fd86d842f9d4cd60c0d398f\" returns successfully" Aug 5 22:21:59.644103 kubelet[2541]: E0805 22:21:59.644061 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:21:59.692429 systemd-networkd[1392]: cali43a74f2bd9f: Gained IPv6LL Aug 5 22:21:59.758241 kubelet[2541]: I0805 22:21:59.757384 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d9b85cbd7-fgmwx" podStartSLOduration=25.871322195 podStartE2EDuration="28.757329805s" podCreationTimestamp="2024-08-05 22:21:31 +0000 UTC" firstStartedPulling="2024-08-05 22:21:55.886129252 +0000 UTC m=+42.512254837" lastFinishedPulling="2024-08-05 22:21:58.772136862 +0000 UTC m=+45.398262447" observedRunningTime="2024-08-05 22:21:59.688857188 +0000 UTC m=+46.314982763" watchObservedRunningTime="2024-08-05 22:21:59.757329805 +0000 UTC m=+46.383455380" Aug 5 22:22:00.646439 kubelet[2541]: E0805 22:22:00.646398 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:00.687255 systemd[1]: Started sshd@10-10.0.0.155:22-10.0.0.1:57860.service - OpenSSH per-connection server daemon (10.0.0.1:57860). Aug 5 22:22:00.722079 containerd[1451]: time="2024-08-05T22:22:00.721995598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:00.723726 containerd[1451]: time="2024-08-05T22:22:00.723596856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:22:00.724977 containerd[1451]: time="2024-08-05T22:22:00.724886344Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:00.729341 containerd[1451]: time="2024-08-05T22:22:00.728906247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:00.730311 containerd[1451]: time="2024-08-05T22:22:00.729722572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.957344419s" Aug 5 22:22:00.730311 containerd[1451]: time="2024-08-05T22:22:00.729769083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:22:00.739326 containerd[1451]: time="2024-08-05T22:22:00.737748941Z" level=info msg="CreateContainer within sandbox 
\"cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:22:00.748564 sshd[4741]: Accepted publickey for core from 10.0.0.1 port 57860 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:00.750602 sshd[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:00.757579 systemd-logind[1436]: New session 11 of user core. Aug 5 22:22:00.763640 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:22:00.773382 containerd[1451]: time="2024-08-05T22:22:00.773337199Z" level=info msg="CreateContainer within sandbox \"cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2bcdb1cc350a8f931a1036c4930ddb2d7ca0afe051a6f26227b0aa970ca8e334\"" Aug 5 22:22:00.774551 containerd[1451]: time="2024-08-05T22:22:00.774523786Z" level=info msg="StartContainer for \"2bcdb1cc350a8f931a1036c4930ddb2d7ca0afe051a6f26227b0aa970ca8e334\"" Aug 5 22:22:00.815552 systemd[1]: Started cri-containerd-2bcdb1cc350a8f931a1036c4930ddb2d7ca0afe051a6f26227b0aa970ca8e334.scope - libcontainer container 2bcdb1cc350a8f931a1036c4930ddb2d7ca0afe051a6f26227b0aa970ca8e334. Aug 5 22:22:00.844488 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Aug 5 22:22:00.866303 containerd[1451]: time="2024-08-05T22:22:00.865952263Z" level=info msg="StartContainer for \"2bcdb1cc350a8f931a1036c4930ddb2d7ca0afe051a6f26227b0aa970ca8e334\" returns successfully" Aug 5 22:22:00.868057 containerd[1451]: time="2024-08-05T22:22:00.867610482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:22:00.916081 sshd[4741]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:00.925291 systemd[1]: sshd@10-10.0.0.155:22-10.0.0.1:57860.service: Deactivated successfully. Aug 5 22:22:00.927222 systemd[1]: session-11.scope: Deactivated successfully. 
Aug 5 22:22:00.928813 systemd-logind[1436]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:22:00.936252 systemd[1]: Started sshd@11-10.0.0.155:22-10.0.0.1:57870.service - OpenSSH per-connection server daemon (10.0.0.1:57870). Aug 5 22:22:00.938057 systemd-logind[1436]: Removed session 11. Aug 5 22:22:00.975227 sshd[4787]: Accepted publickey for core from 10.0.0.1 port 57870 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:00.977025 sshd[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:00.983609 systemd-logind[1436]: New session 12 of user core. Aug 5 22:22:00.990502 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:22:01.149947 sshd[4787]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:01.159466 systemd[1]: sshd@11-10.0.0.155:22-10.0.0.1:57870.service: Deactivated successfully. Aug 5 22:22:01.161662 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:22:01.165597 systemd-logind[1436]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:22:01.173662 systemd[1]: Started sshd@12-10.0.0.155:22-10.0.0.1:57872.service - OpenSSH per-connection server daemon (10.0.0.1:57872). Aug 5 22:22:01.175249 systemd-logind[1436]: Removed session 12. Aug 5 22:22:01.205821 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 57872 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:01.207340 sshd[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:01.211404 systemd-logind[1436]: New session 13 of user core. Aug 5 22:22:01.227388 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:22:01.396939 sshd[4800]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:01.400342 systemd[1]: sshd@12-10.0.0.155:22-10.0.0.1:57872.service: Deactivated successfully. Aug 5 22:22:01.402100 systemd[1]: session-13.scope: Deactivated successfully. 
Aug 5 22:22:01.402722 systemd-logind[1436]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:22:01.403730 systemd-logind[1436]: Removed session 13. Aug 5 22:22:02.521618 containerd[1451]: time="2024-08-05T22:22:02.521571935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:02.522183 containerd[1451]: time="2024-08-05T22:22:02.521615250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 5 22:22:02.523860 containerd[1451]: time="2024-08-05T22:22:02.523819170Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:02.525766 containerd[1451]: time="2024-08-05T22:22:02.525743976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:22:02.526341 containerd[1451]: time="2024-08-05T22:22:02.526315061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.6586692s" Aug 5 22:22:02.526398 containerd[1451]: time="2024-08-05T22:22:02.526343165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 5 22:22:02.531036 containerd[1451]: time="2024-08-05T22:22:02.530999029Z" level=info 
msg="CreateContainer within sandbox \"cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:22:02.549042 containerd[1451]: time="2024-08-05T22:22:02.549000334Z" level=info msg="CreateContainer within sandbox \"cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ebe0d954a75c2d7e22bcaeaf2e4c5ecb961005ccacb30ff3362cfef8cca53d4d\"" Aug 5 22:22:02.549482 containerd[1451]: time="2024-08-05T22:22:02.549457286Z" level=info msg="StartContainer for \"ebe0d954a75c2d7e22bcaeaf2e4c5ecb961005ccacb30ff3362cfef8cca53d4d\"" Aug 5 22:22:02.590446 systemd[1]: Started cri-containerd-ebe0d954a75c2d7e22bcaeaf2e4c5ecb961005ccacb30ff3362cfef8cca53d4d.scope - libcontainer container ebe0d954a75c2d7e22bcaeaf2e4c5ecb961005ccacb30ff3362cfef8cca53d4d. Aug 5 22:22:02.623327 containerd[1451]: time="2024-08-05T22:22:02.623232618Z" level=info msg="StartContainer for \"ebe0d954a75c2d7e22bcaeaf2e4c5ecb961005ccacb30ff3362cfef8cca53d4d\" returns successfully" Aug 5 22:22:02.661224 kubelet[2541]: I0805 22:22:02.661181 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-gh8dm" podStartSLOduration=27.031462544 podStartE2EDuration="31.661127349s" podCreationTimestamp="2024-08-05 22:21:31 +0000 UTC" firstStartedPulling="2024-08-05 22:21:57.896894161 +0000 UTC m=+44.523019736" lastFinishedPulling="2024-08-05 22:22:02.526558966 +0000 UTC m=+49.152684541" observedRunningTime="2024-08-05 22:22:02.661036512 +0000 UTC m=+49.287162077" watchObservedRunningTime="2024-08-05 22:22:02.661127349 +0000 UTC m=+49.287252924" Aug 5 22:22:03.520481 kubelet[2541]: I0805 22:22:03.520452 2541 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 
22:22:03.521442 kubelet[2541]: I0805 22:22:03.521418 2541 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:22:06.412637 systemd[1]: Started sshd@13-10.0.0.155:22-10.0.0.1:41480.service - OpenSSH per-connection server daemon (10.0.0.1:41480). Aug 5 22:22:06.456161 sshd[4874]: Accepted publickey for core from 10.0.0.1 port 41480 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:06.458217 sshd[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:06.463266 systemd-logind[1436]: New session 14 of user core. Aug 5 22:22:06.472609 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:22:06.591818 sshd[4874]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:06.595479 systemd[1]: sshd@13-10.0.0.155:22-10.0.0.1:41480.service: Deactivated successfully. Aug 5 22:22:06.597326 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:22:06.597956 systemd-logind[1436]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:22:06.598748 systemd-logind[1436]: Removed session 14. Aug 5 22:22:11.604631 systemd[1]: Started sshd@14-10.0.0.155:22-10.0.0.1:41496.service - OpenSSH per-connection server daemon (10.0.0.1:41496). Aug 5 22:22:11.642780 sshd[4901]: Accepted publickey for core from 10.0.0.1 port 41496 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:11.645400 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:11.650826 systemd-logind[1436]: New session 15 of user core. Aug 5 22:22:11.666560 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:22:11.780391 sshd[4901]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:11.794399 systemd[1]: sshd@14-10.0.0.155:22-10.0.0.1:41496.service: Deactivated successfully. 
Aug 5 22:22:11.796874 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:22:11.798697 systemd-logind[1436]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:22:11.807824 systemd[1]: Started sshd@15-10.0.0.155:22-10.0.0.1:41502.service - OpenSSH per-connection server daemon (10.0.0.1:41502). Aug 5 22:22:11.808886 systemd-logind[1436]: Removed session 15. Aug 5 22:22:11.841022 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 41502 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:11.842819 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:11.847237 systemd-logind[1436]: New session 16 of user core. Aug 5 22:22:11.854462 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:22:12.066416 sshd[4915]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:12.076255 systemd[1]: sshd@15-10.0.0.155:22-10.0.0.1:41502.service: Deactivated successfully. Aug 5 22:22:12.078444 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:22:12.080507 systemd-logind[1436]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:22:12.086793 systemd[1]: Started sshd@16-10.0.0.155:22-10.0.0.1:41510.service - OpenSSH per-connection server daemon (10.0.0.1:41510). Aug 5 22:22:12.088515 systemd-logind[1436]: Removed session 16. Aug 5 22:22:12.128883 sshd[4928]: Accepted publickey for core from 10.0.0.1 port 41510 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:12.130614 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:12.135734 systemd-logind[1436]: New session 17 of user core. Aug 5 22:22:12.142509 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 5 22:22:13.452166 containerd[1451]: time="2024-08-05T22:22:13.452117200Z" level=info msg="StopPodSandbox for \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\"" Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.494 [WARNING][4977] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--g8prk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4ef4f720-0563-448a-af5b-bdacf6748482", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44", Pod:"coredns-76f75df574-g8prk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92085c360bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.494 [INFO][4977] k8s.go 608: Cleaning up netns ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.494 [INFO][4977] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" iface="eth0" netns="" Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.494 [INFO][4977] k8s.go 615: Releasing IP address(es) ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.494 [INFO][4977] utils.go 188: Calico CNI releasing IP address ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.530 [INFO][4987] ipam_plugin.go 411: Releasing address using handleID ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" HandleID="k8s-pod-network.f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.530 [INFO][4987] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.530 [INFO][4987] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.539 [WARNING][4987] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" HandleID="k8s-pod-network.f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.539 [INFO][4987] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" HandleID="k8s-pod-network.f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.541 [INFO][4987] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:13.549191 containerd[1451]: 2024-08-05 22:22:13.544 [INFO][4977] k8s.go 621: Teardown processing complete. ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:22:13.549191 containerd[1451]: time="2024-08-05T22:22:13.549053817Z" level=info msg="TearDown network for sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\" successfully" Aug 5 22:22:13.549191 containerd[1451]: time="2024-08-05T22:22:13.549084033Z" level=info msg="StopPodSandbox for \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\" returns successfully" Aug 5 22:22:13.550344 containerd[1451]: time="2024-08-05T22:22:13.550302765Z" level=info msg="RemovePodSandbox for \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\"" Aug 5 22:22:13.553601 containerd[1451]: time="2024-08-05T22:22:13.553403394Z" level=info msg="Forcibly stopping sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\"" Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.601 [WARNING][5009] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--g8prk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4ef4f720-0563-448a-af5b-bdacf6748482", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41f96b2b3939a76345e63620cae2dd131f11afa236a5d6fc2ce5ec42759d2d44", Pod:"coredns-76f75df574-g8prk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92085c360bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.601 [INFO][5009] k8s.go 608: Cleaning up netns 
ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.601 [INFO][5009] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" iface="eth0" netns="" Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.602 [INFO][5009] k8s.go 615: Releasing IP address(es) ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.602 [INFO][5009] utils.go 188: Calico CNI releasing IP address ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.630 [INFO][5017] ipam_plugin.go 411: Releasing address using handleID ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" HandleID="k8s-pod-network.f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.630 [INFO][5017] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.630 [INFO][5017] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.642 [WARNING][5017] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" HandleID="k8s-pod-network.f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.642 [INFO][5017] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" HandleID="k8s-pod-network.f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Workload="localhost-k8s-coredns--76f75df574--g8prk-eth0" Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.646 [INFO][5017] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:13.665880 containerd[1451]: 2024-08-05 22:22:13.654 [INFO][5009] k8s.go 621: Teardown processing complete. ContainerID="f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28" Aug 5 22:22:13.666332 containerd[1451]: time="2024-08-05T22:22:13.665942932Z" level=info msg="TearDown network for sandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\" successfully" Aug 5 22:22:13.709373 sshd[4928]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:13.722255 systemd[1]: sshd@16-10.0.0.155:22-10.0.0.1:41510.service: Deactivated successfully. Aug 5 22:22:13.724670 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:22:13.730818 systemd-logind[1436]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:22:13.737464 containerd[1451]: time="2024-08-05T22:22:13.737374380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:22:13.740018 systemd[1]: Started sshd@17-10.0.0.155:22-10.0.0.1:41516.service - OpenSSH per-connection server daemon (10.0.0.1:41516). 
Aug 5 22:22:13.747912 containerd[1451]: time="2024-08-05T22:22:13.746333198Z" level=info msg="RemovePodSandbox \"f8ad5fdef8ddf22ba2fa7907505e9fb9b78a0222e50a1c03f482b217f01fdd28\" returns successfully" Aug 5 22:22:13.748047 systemd-logind[1436]: Removed session 17. Aug 5 22:22:13.749710 containerd[1451]: time="2024-08-05T22:22:13.749593946Z" level=info msg="StopPodSandbox for \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\"" Aug 5 22:22:13.789074 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 41516 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:13.790817 sshd[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:13.797226 systemd-logind[1436]: New session 18 of user core. Aug 5 22:22:13.800450 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.795 [WARNING][5048] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0", GenerateName:"calico-kube-controllers-6d9b85cbd7-", Namespace:"calico-system", SelfLink:"", UID:"10b22de4-c232-47e0-91fb-22ac20262794", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9b85cbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528", Pod:"calico-kube-controllers-6d9b85cbd7-fgmwx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96a82cad8ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.795 [INFO][5048] k8s.go 608: Cleaning up netns ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.795 [INFO][5048] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" iface="eth0" netns="" Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.795 [INFO][5048] k8s.go 615: Releasing IP address(es) ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.795 [INFO][5048] utils.go 188: Calico CNI releasing IP address ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.823 [INFO][5055] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" HandleID="k8s-pod-network.0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.823 [INFO][5055] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.823 [INFO][5055] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.831 [WARNING][5055] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" HandleID="k8s-pod-network.0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.831 [INFO][5055] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" HandleID="k8s-pod-network.0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.832 [INFO][5055] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:13.838175 containerd[1451]: 2024-08-05 22:22:13.835 [INFO][5048] k8s.go 621: Teardown processing complete. ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:22:13.838857 containerd[1451]: time="2024-08-05T22:22:13.838746604Z" level=info msg="TearDown network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\" successfully" Aug 5 22:22:13.838857 containerd[1451]: time="2024-08-05T22:22:13.838787319Z" level=info msg="StopPodSandbox for \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\" returns successfully" Aug 5 22:22:13.839429 containerd[1451]: time="2024-08-05T22:22:13.839395424Z" level=info msg="RemovePodSandbox for \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\"" Aug 5 22:22:13.839505 containerd[1451]: time="2024-08-05T22:22:13.839439946Z" level=info msg="Forcibly stopping sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\"" Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.887 [WARNING][5079] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0", GenerateName:"calico-kube-controllers-6d9b85cbd7-", Namespace:"calico-system", SelfLink:"", UID:"10b22de4-c232-47e0-91fb-22ac20262794", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9b85cbd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea713b9936b040e60ee73123043be1bf5e09bbae8fe513777ce4272630a5e528", Pod:"calico-kube-controllers-6d9b85cbd7-fgmwx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96a82cad8ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.888 [INFO][5079] k8s.go 608: Cleaning up netns ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.888 [INFO][5079] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" iface="eth0" netns="" Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.888 [INFO][5079] k8s.go 615: Releasing IP address(es) ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.888 [INFO][5079] utils.go 188: Calico CNI releasing IP address ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.916 [INFO][5093] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" HandleID="k8s-pod-network.0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.916 [INFO][5093] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.916 [INFO][5093] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.921 [WARNING][5093] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" HandleID="k8s-pod-network.0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.922 [INFO][5093] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" HandleID="k8s-pod-network.0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Workload="localhost-k8s-calico--kube--controllers--6d9b85cbd7--fgmwx-eth0" Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.924 [INFO][5093] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:13.935741 containerd[1451]: 2024-08-05 22:22:13.929 [INFO][5079] k8s.go 621: Teardown processing complete. ContainerID="0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd" Aug 5 22:22:13.935741 containerd[1451]: time="2024-08-05T22:22:13.933318073Z" level=info msg="TearDown network for sandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\" successfully" Aug 5 22:22:13.938397 containerd[1451]: time="2024-08-05T22:22:13.938342928Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:22:13.938474 containerd[1451]: time="2024-08-05T22:22:13.938425682Z" level=info msg="RemovePodSandbox \"0d7d8e0815caf1849a92233cdc7e0e361a56c66ef2abb659e37e1d227fd20cfd\" returns successfully" Aug 5 22:22:13.938984 containerd[1451]: time="2024-08-05T22:22:13.938962763Z" level=info msg="StopPodSandbox for \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\"" Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:13.988 [WARNING][5115] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gh8dm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"688c5c2d-a5bb-4def-9fd3-0971268d2169", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b", Pod:"csi-node-driver-gh8dm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali43a74f2bd9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:13.988 [INFO][5115] k8s.go 608: Cleaning up netns ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:13.988 [INFO][5115] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" iface="eth0" netns="" Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:13.988 [INFO][5115] k8s.go 615: Releasing IP address(es) ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:13.988 [INFO][5115] utils.go 188: Calico CNI releasing IP address ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:14.015 [INFO][5123] ipam_plugin.go 411: Releasing address using handleID ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" HandleID="k8s-pod-network.1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:14.015 [INFO][5123] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:14.015 [INFO][5123] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:14.022 [WARNING][5123] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" HandleID="k8s-pod-network.1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:14.022 [INFO][5123] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" HandleID="k8s-pod-network.1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:14.024 [INFO][5123] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:14.030191 containerd[1451]: 2024-08-05 22:22:14.026 [INFO][5115] k8s.go 621: Teardown processing complete. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:22:14.030191 containerd[1451]: time="2024-08-05T22:22:14.030135294Z" level=info msg="TearDown network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\" successfully" Aug 5 22:22:14.030191 containerd[1451]: time="2024-08-05T22:22:14.030167002Z" level=info msg="StopPodSandbox for \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\" returns successfully" Aug 5 22:22:14.031083 containerd[1451]: time="2024-08-05T22:22:14.030741534Z" level=info msg="RemovePodSandbox for \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\"" Aug 5 22:22:14.031083 containerd[1451]: time="2024-08-05T22:22:14.030774045Z" level=info msg="Forcibly stopping sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\"" Aug 5 22:22:14.082026 sshd[5030]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:14.091327 systemd[1]: sshd@17-10.0.0.155:22-10.0.0.1:41516.service: Deactivated successfully. Aug 5 22:22:14.094399 systemd[1]: session-18.scope: Deactivated successfully. 
Aug 5 22:22:14.095483 systemd-logind[1436]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:22:14.110756 systemd[1]: Started sshd@18-10.0.0.155:22-10.0.0.1:38550.service - OpenSSH per-connection server daemon (10.0.0.1:38550). Aug 5 22:22:14.112564 systemd-logind[1436]: Removed session 18. Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.078 [WARNING][5145] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gh8dm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"688c5c2d-a5bb-4def-9fd3-0971268d2169", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf01828fb2254126412e7640d5760719a6cf5114811a52d6accb83914646561b", Pod:"csi-node-driver-gh8dm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali43a74f2bd9f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.079 [INFO][5145] k8s.go 608: Cleaning up netns ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.079 [INFO][5145] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" iface="eth0" netns="" Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.079 [INFO][5145] k8s.go 615: Releasing IP address(es) ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.079 [INFO][5145] utils.go 188: Calico CNI releasing IP address ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.106 [INFO][5152] ipam_plugin.go 411: Releasing address using handleID ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" HandleID="k8s-pod-network.1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.106 [INFO][5152] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.106 [INFO][5152] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.115 [WARNING][5152] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" HandleID="k8s-pod-network.1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.115 [INFO][5152] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" HandleID="k8s-pod-network.1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Workload="localhost-k8s-csi--node--driver--gh8dm-eth0" Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.119 [INFO][5152] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:14.123805 containerd[1451]: 2024-08-05 22:22:14.121 [INFO][5145] k8s.go 621: Teardown processing complete. ContainerID="1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b" Aug 5 22:22:14.124389 containerd[1451]: time="2024-08-05T22:22:14.124340626Z" level=info msg="TearDown network for sandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\" successfully" Aug 5 22:22:14.129719 containerd[1451]: time="2024-08-05T22:22:14.129634114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:22:14.129719 containerd[1451]: time="2024-08-05T22:22:14.129730073Z" level=info msg="RemovePodSandbox \"1c247f6880709129922f067b4859b1605cbb99f40fc294e9dbac3c9ab23abd2b\" returns successfully" Aug 5 22:22:14.130414 containerd[1451]: time="2024-08-05T22:22:14.130341954Z" level=info msg="StopPodSandbox for \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\"" Aug 5 22:22:14.149029 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 38550 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:14.150941 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:14.157796 systemd-logind[1436]: New session 19 of user core. Aug 5 22:22:14.165514 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.177 [WARNING][5180] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xcv7z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"604142d0-8114-43a0-91aa-43bd2b50a943", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881", Pod:"coredns-76f75df574-xcv7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1e90eb924b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.178 [INFO][5180] k8s.go 608: Cleaning up netns ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.178 [INFO][5180] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" iface="eth0" netns="" Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.178 [INFO][5180] k8s.go 615: Releasing IP address(es) ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.178 [INFO][5180] utils.go 188: Calico CNI releasing IP address ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.200 [INFO][5189] ipam_plugin.go 411: Releasing address using handleID ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" HandleID="k8s-pod-network.d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.200 [INFO][5189] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.200 [INFO][5189] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.206 [WARNING][5189] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" HandleID="k8s-pod-network.d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.206 [INFO][5189] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" HandleID="k8s-pod-network.d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.208 [INFO][5189] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:22:14.212991 containerd[1451]: 2024-08-05 22:22:14.210 [INFO][5180] k8s.go 621: Teardown processing complete. ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:22:14.213573 containerd[1451]: time="2024-08-05T22:22:14.213047109Z" level=info msg="TearDown network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\" successfully" Aug 5 22:22:14.213573 containerd[1451]: time="2024-08-05T22:22:14.213129032Z" level=info msg="StopPodSandbox for \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\" returns successfully" Aug 5 22:22:14.213721 containerd[1451]: time="2024-08-05T22:22:14.213697703Z" level=info msg="RemovePodSandbox for \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\"" Aug 5 22:22:14.213721 containerd[1451]: time="2024-08-05T22:22:14.213724684Z" level=info msg="Forcibly stopping sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\"" Aug 5 22:22:14.295211 sshd[5161]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.262 [WARNING][5219] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--xcv7z-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"604142d0-8114-43a0-91aa-43bd2b50a943", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c48d77a069ceb3de6bec8118d736ba12d34389ba41ee37a725458fd71465c881", Pod:"coredns-76f75df574-xcv7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1e90eb924b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.262 [INFO][5219] k8s.go 608: Cleaning up netns 
ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.262 [INFO][5219] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" iface="eth0" netns="" Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.262 [INFO][5219] k8s.go 615: Releasing IP address(es) ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.262 [INFO][5219] utils.go 188: Calico CNI releasing IP address ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.285 [INFO][5229] ipam_plugin.go 411: Releasing address using handleID ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" HandleID="k8s-pod-network.d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.285 [INFO][5229] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.285 [INFO][5229] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.291 [WARNING][5229] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" HandleID="k8s-pod-network.d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.291 [INFO][5229] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" HandleID="k8s-pod-network.d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Workload="localhost-k8s-coredns--76f75df574--xcv7z-eth0" Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.293 [INFO][5229] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:14.301288 containerd[1451]: 2024-08-05 22:22:14.296 [INFO][5219] k8s.go 621: Teardown processing complete. ContainerID="d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391" Aug 5 22:22:14.301861 containerd[1451]: time="2024-08-05T22:22:14.301341283Z" level=info msg="TearDown network for sandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\" successfully" Aug 5 22:22:14.301409 systemd[1]: sshd@18-10.0.0.155:22-10.0.0.1:38550.service: Deactivated successfully. Aug 5 22:22:14.304081 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:22:14.306013 systemd-logind[1436]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:22:14.307074 containerd[1451]: time="2024-08-05T22:22:14.307003448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:22:14.307199 containerd[1451]: time="2024-08-05T22:22:14.307116077Z" level=info msg="RemovePodSandbox \"d85f58ab1de4948c7982c45546b51d3ac07bc2a07eb0a8c7d6158e6bad07f391\" returns successfully" Aug 5 22:22:14.307981 systemd-logind[1436]: Removed session 19. Aug 5 22:22:19.317716 systemd[1]: Started sshd@19-10.0.0.155:22-10.0.0.1:38552.service - OpenSSH per-connection server daemon (10.0.0.1:38552). Aug 5 22:22:19.374931 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 38552 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:19.377293 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:19.386626 systemd-logind[1436]: New session 20 of user core. Aug 5 22:22:19.394702 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 22:22:19.525153 sshd[5259]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:19.529508 systemd-logind[1436]: Session 20 logged out. Waiting for processes to exit. Aug 5 22:22:19.531109 systemd[1]: sshd@19-10.0.0.155:22-10.0.0.1:38552.service: Deactivated successfully. Aug 5 22:22:19.534422 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 22:22:19.537300 systemd-logind[1436]: Removed session 20. Aug 5 22:22:24.536773 systemd[1]: Started sshd@20-10.0.0.155:22-10.0.0.1:57142.service - OpenSSH per-connection server daemon (10.0.0.1:57142). Aug 5 22:22:24.575518 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 57142 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:24.577410 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:24.583082 systemd-logind[1436]: New session 21 of user core. Aug 5 22:22:24.588480 systemd[1]: Started session-21.scope - Session 21 of User core. 
Aug 5 22:22:24.696876 sshd[5289]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:24.700550 systemd[1]: sshd@20-10.0.0.155:22-10.0.0.1:57142.service: Deactivated successfully. Aug 5 22:22:24.702485 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 22:22:24.703196 systemd-logind[1436]: Session 21 logged out. Waiting for processes to exit. Aug 5 22:22:24.704225 systemd-logind[1436]: Removed session 21. Aug 5 22:22:27.448743 kubelet[2541]: E0805 22:22:27.448687 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:29.714978 systemd[1]: Started sshd@21-10.0.0.155:22-10.0.0.1:57144.service - OpenSSH per-connection server daemon (10.0.0.1:57144). Aug 5 22:22:29.754480 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 57144 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:29.756555 sshd[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:29.769447 systemd-logind[1436]: New session 22 of user core. Aug 5 22:22:29.780976 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 22:22:29.902556 sshd[5328]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:29.907536 systemd[1]: sshd@21-10.0.0.155:22-10.0.0.1:57144.service: Deactivated successfully. Aug 5 22:22:29.909993 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 22:22:29.911070 systemd-logind[1436]: Session 22 logged out. Waiting for processes to exit. Aug 5 22:22:29.911894 systemd-logind[1436]: Removed session 22. 
Aug 5 22:22:33.465924 kubelet[2541]: E0805 22:22:33.465883 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:34.914979 systemd[1]: Started sshd@22-10.0.0.155:22-10.0.0.1:33746.service - OpenSSH per-connection server daemon (10.0.0.1:33746). Aug 5 22:22:34.956262 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 33746 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M Aug 5 22:22:34.957970 sshd[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:34.962603 systemd-logind[1436]: New session 23 of user core. Aug 5 22:22:34.968448 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 22:22:35.077317 sshd[5347]: pam_unix(sshd:session): session closed for user core Aug 5 22:22:35.081669 systemd[1]: sshd@22-10.0.0.155:22-10.0.0.1:33746.service: Deactivated successfully. Aug 5 22:22:35.084253 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 22:22:35.085028 systemd-logind[1436]: Session 23 logged out. Waiting for processes to exit. Aug 5 22:22:35.086204 systemd-logind[1436]: Removed session 23. Aug 5 22:22:36.465218 kubelet[2541]: E0805 22:22:36.465185 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:36.559781 kubelet[2541]: I0805 22:22:36.559727 2541 topology_manager.go:215] "Topology Admit Handler" podUID="2e95d222-5365-4ff9-b4ba-b18af3cf6a4a" podNamespace="calico-apiserver" podName="calico-apiserver-9f9b5c699-5kqpm" Aug 5 22:22:36.570103 systemd[1]: Created slice kubepods-besteffort-pod2e95d222_5365_4ff9_b4ba_b18af3cf6a4a.slice - libcontainer container kubepods-besteffort-pod2e95d222_5365_4ff9_b4ba_b18af3cf6a4a.slice. 
Aug 5 22:22:36.599339 kubelet[2541]: I0805 22:22:36.599254 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e95d222-5365-4ff9-b4ba-b18af3cf6a4a-calico-apiserver-certs\") pod \"calico-apiserver-9f9b5c699-5kqpm\" (UID: \"2e95d222-5365-4ff9-b4ba-b18af3cf6a4a\") " pod="calico-apiserver/calico-apiserver-9f9b5c699-5kqpm" Aug 5 22:22:36.599339 kubelet[2541]: I0805 22:22:36.599329 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d2nm\" (UniqueName: \"kubernetes.io/projected/2e95d222-5365-4ff9-b4ba-b18af3cf6a4a-kube-api-access-8d2nm\") pod \"calico-apiserver-9f9b5c699-5kqpm\" (UID: \"2e95d222-5365-4ff9-b4ba-b18af3cf6a4a\") " pod="calico-apiserver/calico-apiserver-9f9b5c699-5kqpm" Aug 5 22:22:36.700016 kubelet[2541]: E0805 22:22:36.699950 2541 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:22:36.700203 kubelet[2541]: E0805 22:22:36.700090 2541 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e95d222-5365-4ff9-b4ba-b18af3cf6a4a-calico-apiserver-certs podName:2e95d222-5365-4ff9-b4ba-b18af3cf6a4a nodeName:}" failed. No retries permitted until 2024-08-05 22:22:37.200036277 +0000 UTC m=+83.826161852 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2e95d222-5365-4ff9-b4ba-b18af3cf6a4a-calico-apiserver-certs") pod "calico-apiserver-9f9b5c699-5kqpm" (UID: "2e95d222-5365-4ff9-b4ba-b18af3cf6a4a") : secret "calico-apiserver-certs" not found Aug 5 22:22:37.202578 kubelet[2541]: E0805 22:22:37.202513 2541 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:22:37.202743 kubelet[2541]: E0805 22:22:37.202696 2541 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e95d222-5365-4ff9-b4ba-b18af3cf6a4a-calico-apiserver-certs podName:2e95d222-5365-4ff9-b4ba-b18af3cf6a4a nodeName:}" failed. No retries permitted until 2024-08-05 22:22:38.202587873 +0000 UTC m=+84.828713449 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2e95d222-5365-4ff9-b4ba-b18af3cf6a4a-calico-apiserver-certs") pod "calico-apiserver-9f9b5c699-5kqpm" (UID: "2e95d222-5365-4ff9-b4ba-b18af3cf6a4a") : secret "calico-apiserver-certs" not found Aug 5 22:22:38.374680 containerd[1451]: time="2024-08-05T22:22:38.374626763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9f9b5c699-5kqpm,Uid:2e95d222-5365-4ff9-b4ba-b18af3cf6a4a,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:22:38.502008 systemd-networkd[1392]: cali4a3b9271a06: Link UP Aug 5 22:22:38.502712 systemd-networkd[1392]: cali4a3b9271a06: Gained carrier Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.423 [INFO][5366] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0 calico-apiserver-9f9b5c699- calico-apiserver 2e95d222-5365-4ff9-b4ba-b18af3cf6a4a 1170 0 2024-08-05 22:22:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:9f9b5c699 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9f9b5c699-5kqpm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4a3b9271a06 [] []}} ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Namespace="calico-apiserver" Pod="calico-apiserver-9f9b5c699-5kqpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.423 [INFO][5366] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Namespace="calico-apiserver" Pod="calico-apiserver-9f9b5c699-5kqpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.454 [INFO][5379] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" HandleID="k8s-pod-network.a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Workload="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.465 [INFO][5379] ipam_plugin.go 264: Auto assigning IP ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" HandleID="k8s-pod-network.a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Workload="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000508d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9f9b5c699-5kqpm", "timestamp":"2024-08-05 22:22:38.454798877 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.465 [INFO][5379] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.465 [INFO][5379] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.465 [INFO][5379] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.467 [INFO][5379] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" host="localhost" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.471 [INFO][5379] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.477 [INFO][5379] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.478 [INFO][5379] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.482 [INFO][5379] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.482 [INFO][5379] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" host="localhost" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.486 [INFO][5379] ipam.go 1685: Creating new handle: k8s-pod-network.a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042 Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.489 [INFO][5379] ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" host="localhost" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.497 [INFO][5379] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" host="localhost" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.497 [INFO][5379] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" host="localhost" Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.497 [INFO][5379] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:22:38.516742 containerd[1451]: 2024-08-05 22:22:38.497 [INFO][5379] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" HandleID="k8s-pod-network.a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Workload="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0" Aug 5 22:22:38.517526 containerd[1451]: 2024-08-05 22:22:38.499 [INFO][5366] k8s.go 386: Populated endpoint ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Namespace="calico-apiserver" Pod="calico-apiserver-9f9b5c699-5kqpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0", GenerateName:"calico-apiserver-9f9b5c699-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e95d222-5365-4ff9-b4ba-b18af3cf6a4a", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 36, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9f9b5c699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9f9b5c699-5kqpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a3b9271a06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:22:38.517526 containerd[1451]: 2024-08-05 22:22:38.500 [INFO][5366] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Namespace="calico-apiserver" Pod="calico-apiserver-9f9b5c699-5kqpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0"
Aug 5 22:22:38.517526 containerd[1451]: 2024-08-05 22:22:38.500 [INFO][5366] dataplane_linux.go 68: Setting the host side veth name to cali4a3b9271a06 ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Namespace="calico-apiserver" Pod="calico-apiserver-9f9b5c699-5kqpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0"
Aug 5 22:22:38.517526 containerd[1451]: 2024-08-05 22:22:38.502 [INFO][5366] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Namespace="calico-apiserver" Pod="calico-apiserver-9f9b5c699-5kqpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0"
Aug 5 22:22:38.517526 containerd[1451]: 2024-08-05 22:22:38.503 [INFO][5366] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Namespace="calico-apiserver" Pod="calico-apiserver-9f9b5c699-5kqpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0", GenerateName:"calico-apiserver-9f9b5c699-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e95d222-5365-4ff9-b4ba-b18af3cf6a4a", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 22, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9f9b5c699", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042", Pod:"calico-apiserver-9f9b5c699-5kqpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4a3b9271a06", MAC:"26:46:bd:a1:c7:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:22:38.517526 containerd[1451]: 2024-08-05 22:22:38.513 [INFO][5366] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042" Namespace="calico-apiserver" Pod="calico-apiserver-9f9b5c699-5kqpm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9f9b5c699--5kqpm-eth0"
Aug 5 22:22:38.539477 containerd[1451]: time="2024-08-05T22:22:38.538704951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:22:38.539739 containerd[1451]: time="2024-08-05T22:22:38.539440067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:38.539739 containerd[1451]: time="2024-08-05T22:22:38.539565414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:22:38.539739 containerd[1451]: time="2024-08-05T22:22:38.539575804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:22:38.561488 systemd[1]: Started cri-containerd-a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042.scope - libcontainer container a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042.
Aug 5 22:22:38.576051 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 22:22:38.604191 containerd[1451]: time="2024-08-05T22:22:38.602139374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9f9b5c699-5kqpm,Uid:2e95d222-5365-4ff9-b4ba-b18af3cf6a4a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042\""
Aug 5 22:22:38.605999 containerd[1451]: time="2024-08-05T22:22:38.605962734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Aug 5 22:22:40.012540 systemd-networkd[1392]: cali4a3b9271a06: Gained IPv6LL
Aug 5 22:22:40.094567 systemd[1]: Started sshd@23-10.0.0.155:22-10.0.0.1:33754.service - OpenSSH per-connection server daemon (10.0.0.1:33754).
Aug 5 22:22:40.239919 sshd[5456]: Accepted publickey for core from 10.0.0.1 port 33754 ssh2: RSA SHA256:mmArdL9mbrPch5i1wtd6du+fSojJu3P2wwCXr0hVY1M
Aug 5 22:22:40.243549 sshd[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:40.252994 systemd-logind[1436]: New session 24 of user core.
Aug 5 22:22:40.259876 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 5 22:22:40.453220 sshd[5456]: pam_unix(sshd:session): session closed for user core
Aug 5 22:22:40.460895 systemd[1]: sshd@23-10.0.0.155:22-10.0.0.1:33754.service: Deactivated successfully.
Aug 5 22:22:40.464079 systemd[1]: session-24.scope: Deactivated successfully.
Aug 5 22:22:40.465515 systemd-logind[1436]: Session 24 logged out. Waiting for processes to exit.
Aug 5 22:22:40.467731 systemd-logind[1436]: Removed session 24.
Aug 5 22:22:41.123825 containerd[1451]: time="2024-08-05T22:22:41.123764742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:41.125488 containerd[1451]: time="2024-08-05T22:22:41.125329838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Aug 5 22:22:41.129950 containerd[1451]: time="2024-08-05T22:22:41.127640992Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:41.136545 containerd[1451]: time="2024-08-05T22:22:41.133486837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:41.136545 containerd[1451]: time="2024-08-05T22:22:41.134543756Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.528410368s"
Aug 5 22:22:41.136545 containerd[1451]: time="2024-08-05T22:22:41.134577190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Aug 5 22:22:41.142936 containerd[1451]: time="2024-08-05T22:22:41.142327275Z" level=info msg="CreateContainer within sandbox \"a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 5 22:22:41.183322 containerd[1451]: time="2024-08-05T22:22:41.182390428Z" level=info msg="CreateContainer within sandbox \"a0a5ae1ee9cbddfe51e5b829e494a9accb74ec9ceacfeae0a64eb3ada3bcd042\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cf0f7ad248ca559b38ed7f0ac80e5a0607b1eb3d2f7d696a5aae97d268e1320d\""
Aug 5 22:22:41.185086 containerd[1451]: time="2024-08-05T22:22:41.184964291Z" level=info msg="StartContainer for \"cf0f7ad248ca559b38ed7f0ac80e5a0607b1eb3d2f7d696a5aae97d268e1320d\""
Aug 5 22:22:41.247016 systemd[1]: Started cri-containerd-cf0f7ad248ca559b38ed7f0ac80e5a0607b1eb3d2f7d696a5aae97d268e1320d.scope - libcontainer container cf0f7ad248ca559b38ed7f0ac80e5a0607b1eb3d2f7d696a5aae97d268e1320d.
Aug 5 22:22:41.489711 containerd[1451]: time="2024-08-05T22:22:41.489358153Z" level=info msg="StartContainer for \"cf0f7ad248ca559b38ed7f0ac80e5a0607b1eb3d2f7d696a5aae97d268e1320d\" returns successfully"
Aug 5 22:22:42.254896 kubelet[2541]: I0805 22:22:42.254379 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9f9b5c699-5kqpm" podStartSLOduration=3.722719101 podStartE2EDuration="6.254329609s" podCreationTimestamp="2024-08-05 22:22:36 +0000 UTC" firstStartedPulling="2024-08-05 22:22:38.60446008 +0000 UTC m=+85.230585666" lastFinishedPulling="2024-08-05 22:22:41.136070589 +0000 UTC m=+87.762196174" observedRunningTime="2024-08-05 22:22:41.778286279 +0000 UTC m=+88.404411854" watchObservedRunningTime="2024-08-05 22:22:42.254329609 +0000 UTC m=+88.880455184"