Jul 2 00:18:35.202382 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:18:35.202409 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:18:35.202423 kernel: BIOS-provided physical RAM map:
Jul 2 00:18:35.202432 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 2 00:18:35.202441 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 2 00:18:35.202449 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 2 00:18:35.202460 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 2 00:18:35.202468 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 2 00:18:35.202477 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 2 00:18:35.202499 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 2 00:18:35.202511 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 2 00:18:35.202520 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jul 2 00:18:35.202528 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jul 2 00:18:35.202537 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jul 2 00:18:35.202548 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 2 00:18:35.202560 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 2 00:18:35.202569 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 2 00:18:35.202579 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 2 00:18:35.202588 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 2 00:18:35.202597 kernel: NX (Execute Disable) protection: active
Jul 2 00:18:35.202606 kernel: APIC: Static calls initialized
Jul 2 00:18:35.202616 kernel: efi: EFI v2.7 by EDK II
Jul 2 00:18:35.202625 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b5ef418
Jul 2 00:18:35.202635 kernel: SMBIOS 2.8 present.
Jul 2 00:18:35.202644 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Jul 2 00:18:35.202653 kernel: Hypervisor detected: KVM
Jul 2 00:18:35.202672 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:18:35.202685 kernel: kvm-clock: using sched offset of 5521017488 cycles
Jul 2 00:18:35.202695 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:18:35.202705 kernel: tsc: Detected 2794.746 MHz processor
Jul 2 00:18:35.202714 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:18:35.202724 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:18:35.202734 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 2 00:18:35.202760 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 2 00:18:35.202770 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:18:35.202783 kernel: Using GB pages for direct mapping
Jul 2 00:18:35.202792 kernel: Secure boot disabled
Jul 2 00:18:35.202802 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:18:35.202812 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 2 00:18:35.202822 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Jul 2 00:18:35.202836 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:18:35.202846 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:18:35.202859 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 2 00:18:35.202869 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:18:35.202880 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:18:35.202890 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:18:35.202900 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 2 00:18:35.202910 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Jul 2 00:18:35.202928 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Jul 2 00:18:35.202938 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 2 00:18:35.202950 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Jul 2 00:18:35.202960 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Jul 2 00:18:35.202971 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Jul 2 00:18:35.202983 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Jul 2 00:18:35.202994 kernel: No NUMA configuration found
Jul 2 00:18:35.203006 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 2 00:18:35.203017 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 2 00:18:35.203029 kernel: Zone ranges:
Jul 2 00:18:35.203040 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:18:35.203052 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 2 00:18:35.203063 kernel: Normal empty
Jul 2 00:18:35.203073 kernel: Movable zone start for each node
Jul 2 00:18:35.203083 kernel: Early memory node ranges
Jul 2 00:18:35.203093 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 2 00:18:35.203103 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 2 00:18:35.203113 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 2 00:18:35.203123 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 2 00:18:35.203134 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 2 00:18:35.203146 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 2 00:18:35.203167 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 2 00:18:35.203178 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:18:35.203188 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 2 00:18:35.203198 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 2 00:18:35.203208 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:18:35.203218 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 2 00:18:35.203229 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 2 00:18:35.203239 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 2 00:18:35.203249 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 2 00:18:35.203263 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:18:35.203273 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:18:35.203283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:18:35.203293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:18:35.203304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:18:35.203314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:18:35.203324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:18:35.203334 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:18:35.203344 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:18:35.203357 kernel: TSC deadline timer available
Jul 2 00:18:35.203367 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 2 00:18:35.203377 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:18:35.203387 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 2 00:18:35.203397 kernel: kvm-guest: setup PV sched yield
Jul 2 00:18:35.203408 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Jul 2 00:18:35.203418 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:18:35.203428 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:18:35.203439 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 2 00:18:35.203451 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Jul 2 00:18:35.203462 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Jul 2 00:18:35.203472 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 2 00:18:35.203493 kernel: kvm-guest: PV spinlocks enabled
Jul 2 00:18:35.203504 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:18:35.203515 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:18:35.203526 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:18:35.203536 kernel: random: crng init done
Jul 2 00:18:35.203550 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:18:35.203560 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:18:35.203570 kernel: Fallback order for Node 0: 0
Jul 2 00:18:35.203580 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 2 00:18:35.203590 kernel: Policy zone: DMA32
Jul 2 00:18:35.203601 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:18:35.203611 kernel: Memory: 2395520K/2567000K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 171220K reserved, 0K cma-reserved)
Jul 2 00:18:35.203622 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:18:35.203632 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:18:35.203645 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:18:35.203655 kernel: Dynamic Preempt: voluntary
Jul 2 00:18:35.203675 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:18:35.203687 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:18:35.203697 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:18:35.203718 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:18:35.203731 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:18:35.203742 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:18:35.203753 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:18:35.203764 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:18:35.203774 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 2 00:18:35.203785 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:18:35.203799 kernel: Console: colour dummy device 80x25
Jul 2 00:18:35.203809 kernel: printk: console [ttyS0] enabled
Jul 2 00:18:35.203820 kernel: ACPI: Core revision 20230628
Jul 2 00:18:35.203831 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 00:18:35.203842 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:18:35.203856 kernel: x2apic enabled
Jul 2 00:18:35.203866 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:18:35.203877 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 2 00:18:35.203888 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 2 00:18:35.203899 kernel: kvm-guest: setup PV IPIs
Jul 2 00:18:35.203909 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:18:35.203920 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 00:18:35.203930 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 2 00:18:35.203941 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 00:18:35.203954 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 2 00:18:35.203965 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 2 00:18:35.203976 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:18:35.203986 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:18:35.203997 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:18:35.204008 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:18:35.204019 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 2 00:18:35.204029 kernel: RETBleed: Mitigation: untrained return thunk
Jul 2 00:18:35.204040 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 00:18:35.204054 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 00:18:35.204065 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 2 00:18:35.204076 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 2 00:18:35.204087 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 2 00:18:35.204098 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:18:35.204108 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:18:35.204119 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:18:35.204130 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:18:35.204143 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 2 00:18:35.204154 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:18:35.204164 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:18:35.204174 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:18:35.204184 kernel: SELinux: Initializing.
Jul 2 00:18:35.204194 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:18:35.204205 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:18:35.204216 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 2 00:18:35.204226 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:18:35.204239 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:18:35.204247 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:18:35.204254 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 2 00:18:35.204262 kernel: ... version: 0
Jul 2 00:18:35.204269 kernel: ... bit width: 48
Jul 2 00:18:35.204277 kernel: ... generic registers: 6
Jul 2 00:18:35.204284 kernel: ... value mask: 0000ffffffffffff
Jul 2 00:18:35.204292 kernel: ... max period: 00007fffffffffff
Jul 2 00:18:35.204299 kernel: ... fixed-purpose events: 0
Jul 2 00:18:35.204310 kernel: ... event mask: 000000000000003f
Jul 2 00:18:35.204317 kernel: signal: max sigframe size: 1776
Jul 2 00:18:35.204325 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:18:35.204333 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:18:35.204340 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:18:35.204348 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:18:35.204355 kernel: .... node #0, CPUs: #1 #2 #3
Jul 2 00:18:35.204363 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:18:35.204370 kernel: smpboot: Max logical packages: 1
Jul 2 00:18:35.204385 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 2 00:18:35.204403 kernel: devtmpfs: initialized
Jul 2 00:18:35.204425 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:18:35.204441 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 2 00:18:35.204463 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 2 00:18:35.204471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 2 00:18:35.204479 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 2 00:18:35.204497 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 2 00:18:35.204505 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:18:35.204516 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:18:35.204524 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:18:35.204532 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:18:35.204540 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:18:35.204547 kernel: audit: type=2000 audit(1719879512.909:1): state=initialized audit_enabled=0 res=1
Jul 2 00:18:35.204555 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:18:35.204563 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:18:35.204570 kernel: cpuidle: using governor menu
Jul 2 00:18:35.204578 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:18:35.204592 kernel: dca service started, version 1.12.1
Jul 2 00:18:35.204600 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:18:35.204607 kernel: PCI: Using configuration type 1 for extended access
Jul 2 00:18:35.204615 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:18:35.204623 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:18:35.204630 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:18:35.204638 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:18:35.204645 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:18:35.204653 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:18:35.204672 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:18:35.204680 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:18:35.204687 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:18:35.204695 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:18:35.204702 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:18:35.204710 kernel: ACPI: Interpreter enabled
Jul 2 00:18:35.204717 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 00:18:35.204725 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:18:35.204733 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:18:35.204743 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:18:35.204750 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:18:35.204758 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:18:35.204956 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:18:35.204969 kernel: acpiphp: Slot [3] registered
Jul 2 00:18:35.204976 kernel: acpiphp: Slot [4] registered
Jul 2 00:18:35.204984 kernel: acpiphp: Slot [5] registered
Jul 2 00:18:35.204991 kernel: acpiphp: Slot [6] registered
Jul 2 00:18:35.205002 kernel: acpiphp: Slot [7] registered
Jul 2 00:18:35.205009 kernel: acpiphp: Slot [8] registered
Jul 2 00:18:35.205017 kernel: acpiphp: Slot [9] registered
Jul 2 00:18:35.205024 kernel: acpiphp: Slot [10] registered
Jul 2 00:18:35.205032 kernel: acpiphp: Slot [11] registered
Jul 2 00:18:35.205039 kernel: acpiphp: Slot [12] registered
Jul 2 00:18:35.205047 kernel: acpiphp: Slot [13] registered
Jul 2 00:18:35.205054 kernel: acpiphp: Slot [14] registered
Jul 2 00:18:35.205062 kernel: acpiphp: Slot [15] registered
Jul 2 00:18:35.205071 kernel: acpiphp: Slot [16] registered
Jul 2 00:18:35.205079 kernel: acpiphp: Slot [17] registered
Jul 2 00:18:35.205086 kernel: acpiphp: Slot [18] registered
Jul 2 00:18:35.205093 kernel: acpiphp: Slot [19] registered
Jul 2 00:18:35.205101 kernel: acpiphp: Slot [20] registered
Jul 2 00:18:35.205108 kernel: acpiphp: Slot [21] registered
Jul 2 00:18:35.205116 kernel: acpiphp: Slot [22] registered
Jul 2 00:18:35.205123 kernel: acpiphp: Slot [23] registered
Jul 2 00:18:35.205130 kernel: acpiphp: Slot [24] registered
Jul 2 00:18:35.205138 kernel: acpiphp: Slot [25] registered
Jul 2 00:18:35.205148 kernel: acpiphp: Slot [26] registered
Jul 2 00:18:35.205155 kernel: acpiphp: Slot [27] registered
Jul 2 00:18:35.205162 kernel: acpiphp: Slot [28] registered
Jul 2 00:18:35.205170 kernel: acpiphp: Slot [29] registered
Jul 2 00:18:35.205177 kernel: acpiphp: Slot [30] registered
Jul 2 00:18:35.205185 kernel: acpiphp: Slot [31] registered
Jul 2 00:18:35.205192 kernel: PCI host bridge to bus 0000:00
Jul 2 00:18:35.205356 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:18:35.205517 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:18:35.205673 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:18:35.205828 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jul 2 00:18:35.205966 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Jul 2 00:18:35.206103 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:18:35.206313 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:18:35.206495 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:18:35.206679 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:18:35.206870 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jul 2 00:18:35.207035 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:18:35.207193 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:18:35.207348 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:18:35.207761 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:18:35.207933 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:18:35.208087 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 2 00:18:35.208240 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jul 2 00:18:35.208409 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jul 2 00:18:35.208574 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 2 00:18:35.208731 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Jul 2 00:18:35.208878 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 2 00:18:35.209024 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Jul 2 00:18:35.209165 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:18:35.209321 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:18:35.209464 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jul 2 00:18:35.209640 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 2 00:18:35.209792 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 2 00:18:35.209945 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:18:35.210095 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:18:35.210241 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 2 00:18:35.210386 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 2 00:18:35.210561 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:18:35.210723 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 00:18:35.210868 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Jul 2 00:18:35.211010 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 2 00:18:35.211160 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 2 00:18:35.211174 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:18:35.211185 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:18:35.211195 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:18:35.211205 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:18:35.211216 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:18:35.211226 kernel: iommu: Default domain type: Translated
Jul 2 00:18:35.211236 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:18:35.211246 kernel: efivars: Registered efivars operations
Jul 2 00:18:35.211260 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:18:35.211271 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:18:35.211282 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 2 00:18:35.211291 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 2 00:18:35.211301 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 2 00:18:35.211311 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 2 00:18:35.211454 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:18:35.211613 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:18:35.211769 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:18:35.211789 kernel: vgaarb: loaded
Jul 2 00:18:35.211800 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 00:18:35.211810 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 00:18:35.211820 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:18:35.211830 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:18:35.211841 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:18:35.211852 kernel: pnp: PnP ACPI init
Jul 2 00:18:35.212050 kernel: pnp 00:02: [dma 2]
Jul 2 00:18:35.212066 kernel: pnp: PnP ACPI: found 6 devices
Jul 2 00:18:35.212080 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:18:35.212091 kernel: NET: Registered PF_INET protocol family
Jul 2 00:18:35.212101 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:18:35.212112 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:18:35.212123 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:18:35.212133 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:18:35.212144 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:18:35.212155 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:18:35.212166 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:18:35.212180 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:18:35.212190 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:18:35.212200 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:18:35.212349 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 2 00:18:35.212526 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 2 00:18:35.212674 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:18:35.212812 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:18:35.212944 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:18:35.213084 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jul 2 00:18:35.213218 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Jul 2 00:18:35.213368 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:18:35.213563 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:18:35.213579 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:18:35.213589 kernel: Initialise system trusted keyrings
Jul 2 00:18:35.213600 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:18:35.213610 kernel: Key type asymmetric registered
Jul 2 00:18:35.213625 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:18:35.213635 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:18:35.213646 kernel: io scheduler mq-deadline registered
Jul 2 00:18:35.213656 kernel: io scheduler kyber registered
Jul 2 00:18:35.213678 kernel: io scheduler bfq registered
Jul 2 00:18:35.213688 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:18:35.213700 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:18:35.213710 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 00:18:35.213720 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:18:35.213734 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:18:35.213744 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:18:35.213755 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:18:35.213785 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:18:35.213798 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:18:35.213953 kernel: rtc_cmos 00:05: RTC can wake from S4
Jul 2 00:18:35.214090 kernel: rtc_cmos 00:05: registered as rtc0
Jul 2 00:18:35.214104 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:18:35.214119 kernel: hpet: Lost 1 RTC interrupts
Jul 2 00:18:35.214254 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T00:18:34 UTC (1719879514)
Jul 2 00:18:35.214391 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 2 00:18:35.214406 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 2 00:18:35.214417 kernel: efifb: probing for efifb
Jul 2 00:18:35.214428 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jul 2 00:18:35.214439 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jul 2 00:18:35.214450 kernel: efifb: scrolling: redraw
Jul 2 00:18:35.214461 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jul 2 00:18:35.214476 kernel: Console: switching to colour frame buffer device 100x37
Jul 2 00:18:35.214516 kernel: fb0: EFI VGA frame buffer device
Jul 2 00:18:35.214527 kernel: pstore: Using crash dump compression: deflate
Jul 2 00:18:35.214538 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 2 00:18:35.214549 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:18:35.214560 kernel: Segment Routing with IPv6
Jul 2 00:18:35.214571 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:18:35.214582 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:18:35.214593 kernel: Key type dns_resolver registered
Jul 2 00:18:35.214607 kernel: IPI shorthand broadcast: enabled
Jul 2 00:18:35.214619 kernel: sched_clock: Marking stable (1507003912, 131879545)->(1773746165, -134862708)
Jul 2 00:18:35.214633 kernel: registered taskstats version 1
Jul 2 00:18:35.214644 kernel: Loading compiled-in X.509 certificates
Jul 2 00:18:35.214655 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:18:35.214684 kernel: Key type .fscrypt registered
Jul 2 00:18:35.214695 kernel: Key type fscrypt-provisioning registered
Jul 2 00:18:35.214706 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:18:35.214717 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:18:35.214728 kernel: ima: No architecture policies found
Jul 2 00:18:35.214739 kernel: clk: Disabling unused clocks
Jul 2 00:18:35.214750 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:18:35.214761 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:18:35.214772 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:18:35.214788 kernel: Run /init as init process
Jul 2 00:18:35.214799 kernel: with arguments:
Jul 2 00:18:35.214810 kernel: /init
Jul 2 00:18:35.214821 kernel: with environment:
Jul 2 00:18:35.214831 kernel: HOME=/
Jul 2 00:18:35.214842 kernel: TERM=linux
Jul 2 00:18:35.214853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:18:35.214867 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:18:35.214884 systemd[1]: Detected virtualization kvm.
Jul 2 00:18:35.214896 systemd[1]: Detected architecture x86-64.
Jul 2 00:18:35.214907 systemd[1]: Running in initrd.
Jul 2 00:18:35.214918 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:18:35.214932 systemd[1]: Hostname set to .
Jul 2 00:18:35.214944 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:18:35.214958 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:18:35.214971 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:18:35.214988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:18:35.215000 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:18:35.215012 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:18:35.215024 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:18:35.215036 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:18:35.215050 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:18:35.215065 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:18:35.215077 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:18:35.215089 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:18:35.215101 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:18:35.215113 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:18:35.215124 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:18:35.215136 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:18:35.215148 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:18:35.215160 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:18:35.215175 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:18:35.215187 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:18:35.215199 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:18:35.215211 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:18:35.215222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:18:35.215234 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:18:35.215246 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:18:35.215257 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:18:35.215273 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:18:35.215284 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:18:35.215296 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:18:35.215308 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:18:35.215320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:18:35.215332 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:18:35.215343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:18:35.215383 systemd-journald[193]: Collecting audit messages is disabled.
Jul 2 00:18:35.215413 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:18:35.215426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:18:35.215441 systemd-journald[193]: Journal started
Jul 2 00:18:35.215466 systemd-journald[193]: Runtime Journal (/run/log/journal/f7ca357a1f0d4e68927f3aefa9ebb3b2) is 6.0M, max 48.3M, 42.3M free.
Jul 2 00:18:35.201750 systemd-modules-load[194]: Inserted module 'overlay'
Jul 2 00:18:35.219496 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:18:35.238628 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:18:35.243549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:18:35.247965 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:18:35.287248 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 00:18:35.282534 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:18:35.289876 kernel: Bridge firewalling registered Jul 2 00:18:35.283115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:18:35.285641 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:18:35.289604 systemd-modules-load[194]: Inserted module 'br_netfilter' Jul 2 00:18:35.291250 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:18:35.294960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:18:35.303653 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:18:35.304070 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:18:35.442747 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 00:18:35.504168 dracut-cmdline[225]: dracut-dracut-053 Jul 2 00:18:35.504168 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:18:35.444469 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:18:35.508817 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:18:35.547061 systemd-resolved[291]: Positive Trust Anchors: Jul 2 00:18:35.547079 systemd-resolved[291]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:18:35.547109 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:18:35.549726 systemd-resolved[291]: Defaulting to hostname 'linux'. Jul 2 00:18:35.550844 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:18:35.557930 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:18:35.613524 kernel: SCSI subsystem initialized Jul 2 00:18:35.627524 kernel: Loading iSCSI transport class v2.0-870. Jul 2 00:18:35.676531 kernel: iscsi: registered transport (tcp) Jul 2 00:18:35.719520 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:18:35.719610 kernel: QLogic iSCSI HBA Driver Jul 2 00:18:35.771961 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:18:35.816864 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:18:35.918566 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 2 00:18:35.918679 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:18:35.918698 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:18:36.005548 kernel: raid6: avx2x4 gen() 28560 MB/s Jul 2 00:18:36.022534 kernel: raid6: avx2x2 gen() 26438 MB/s Jul 2 00:18:36.100692 kernel: raid6: avx2x1 gen() 23619 MB/s Jul 2 00:18:36.100783 kernel: raid6: using algorithm avx2x4 gen() 28560 MB/s Jul 2 00:18:36.132030 kernel: raid6: .... xor() 5430 MB/s, rmw enabled Jul 2 00:18:36.132106 kernel: raid6: using avx2x2 recovery algorithm Jul 2 00:18:36.168523 kernel: xor: automatically using best checksumming function avx Jul 2 00:18:36.378534 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:18:36.394005 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:18:36.445791 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:18:36.459993 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jul 2 00:18:36.464775 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:18:36.474255 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 00:18:36.495270 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jul 2 00:18:36.531880 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:18:36.563723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:18:36.634120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:18:36.663670 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:18:36.688325 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:18:36.692255 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 2 00:18:36.698884 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 2 00:18:36.709722 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 00:18:36.709923 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 00:18:36.709951 kernel: GPT:9289727 != 19775487 Jul 2 00:18:36.709966 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 00:18:36.709980 kernel: GPT:9289727 != 19775487 Jul 2 00:18:36.709994 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 00:18:36.710009 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:18:36.698326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:18:36.698405 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:18:36.743743 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:18:36.770844 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 00:18:36.772507 kernel: libata version 3.00 loaded. Jul 2 00:18:36.776876 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:18:36.780585 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 00:18:36.817165 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 00:18:36.817187 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (464) Jul 2 00:18:36.817201 kernel: scsi host0: ata_piix Jul 2 00:18:36.817392 kernel: scsi host1: ata_piix Jul 2 00:18:36.817585 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 00:18:36.817600 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 00:18:36.802198 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jul 2 00:18:36.861812 kernel: AES CTR mode by8 optimization enabled Jul 2 00:18:36.861842 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462) Jul 2 00:18:36.871867 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 00:18:36.885026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 00:18:36.886874 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 00:18:36.897027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:18:36.933783 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:18:36.977887 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:18:37.013189 kernel: ata2: found unknown device (class 0) Jul 2 00:18:37.013225 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 00:18:36.977988 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:18:36.981076 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:18:37.020572 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 00:18:37.013181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:18:37.013281 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:18:37.020705 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:18:37.023662 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:18:37.040804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:18:37.088570 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 2 00:18:37.114966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:18:37.150652 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 00:18:37.169525 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 00:18:37.169562 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 00:18:37.224318 disk-uuid[537]: Primary Header is updated. Jul 2 00:18:37.224318 disk-uuid[537]: Secondary Entries is updated. Jul 2 00:18:37.224318 disk-uuid[537]: Secondary Header is updated. Jul 2 00:18:37.245518 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:18:37.249499 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:18:37.253504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:18:38.259545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:18:38.259652 disk-uuid[564]: The operation has completed successfully. Jul 2 00:18:38.294860 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:18:38.295042 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:18:38.331808 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:18:38.335432 sh[581]: Success Jul 2 00:18:38.370529 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 00:18:38.411105 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:18:38.422378 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:18:38.425346 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 2 00:18:38.463017 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1 Jul 2 00:18:38.463078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:18:38.463089 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:18:38.464359 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:18:38.465322 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:18:38.474756 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:18:38.476885 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 00:18:38.484826 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:18:38.501321 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:18:38.509952 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:18:38.510012 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:18:38.510042 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:18:38.535527 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:18:38.547415 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:18:38.565520 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:18:38.636381 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:18:38.646850 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 2 00:18:38.706617 systemd-networkd[759]: lo: Link UP Jul 2 00:18:38.706628 systemd-networkd[759]: lo: Gained carrier Jul 2 00:18:38.708675 systemd-networkd[759]: Enumeration completed Jul 2 00:18:38.709180 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:18:38.709185 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:18:38.710677 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:18:38.711142 systemd-networkd[759]: eth0: Link UP Jul 2 00:18:38.711147 systemd-networkd[759]: eth0: Gained carrier Jul 2 00:18:38.711156 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:18:38.729511 systemd[1]: Reached target network.target - Network. Jul 2 00:18:38.734831 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:18:38.757727 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:18:38.781537 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 2 00:18:38.842415 ignition[763]: Ignition 2.18.0 Jul 2 00:18:38.842428 ignition[763]: Stage: fetch-offline Jul 2 00:18:38.842471 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:18:38.842494 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:18:38.842723 ignition[763]: parsed url from cmdline: "" Jul 2 00:18:38.842728 ignition[763]: no config URL provided Jul 2 00:18:38.842733 ignition[763]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:18:38.842746 ignition[763]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:18:38.842779 ignition[763]: op(1): [started] loading QEMU firmware config module Jul 2 00:18:38.842785 ignition[763]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 00:18:38.894832 ignition[763]: op(1): [finished] loading QEMU firmware config module Jul 2 00:18:38.941846 ignition[763]: parsing config with SHA512: cc3db509e2b2ccbec0f60a9be03eb4bf7251a9a91710353777cd9e9b7872794220e151e0b9192c77e6cda5b909d4d3d91d4e4448a71b7a48b10a1b3007feb91c Jul 2 00:18:38.945596 unknown[763]: fetched base config from "system" Jul 2 00:18:38.945613 unknown[763]: fetched user config from "qemu" Jul 2 00:18:38.947570 systemd-resolved[291]: Detected conflict on linux IN A 10.0.0.84 Jul 2 00:18:38.947598 systemd-resolved[291]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Jul 2 00:18:38.950567 ignition[763]: fetch-offline: fetch-offline passed Jul 2 00:18:38.950778 ignition[763]: Ignition finished successfully Jul 2 00:18:38.954442 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:18:38.957666 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 00:18:38.970279 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 2 00:18:38.988796 ignition[778]: Ignition 2.18.0 Jul 2 00:18:38.988812 ignition[778]: Stage: kargs Jul 2 00:18:38.989003 ignition[778]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:18:38.989015 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:18:38.989853 ignition[778]: kargs: kargs passed Jul 2 00:18:38.989906 ignition[778]: Ignition finished successfully Jul 2 00:18:38.995978 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:18:39.009736 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 00:18:39.027128 ignition[787]: Ignition 2.18.0 Jul 2 00:18:39.027143 ignition[787]: Stage: disks Jul 2 00:18:39.027340 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:18:39.027352 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:18:39.028211 ignition[787]: disks: disks passed Jul 2 00:18:39.028270 ignition[787]: Ignition finished successfully Jul 2 00:18:39.053050 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:18:39.056169 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:18:39.058903 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:18:39.061846 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:18:39.064154 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:18:39.066524 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:18:39.080700 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 00:18:39.127602 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 00:18:39.318226 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:18:39.338686 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 2 00:18:39.562524 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none. Jul 2 00:18:39.562979 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:18:39.563734 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:18:39.582677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:18:39.619984 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:18:39.627246 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806) Jul 2 00:18:39.627276 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:18:39.627291 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:18:39.627305 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:18:39.626728 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 00:18:39.626812 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:18:39.626855 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:18:39.638044 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:18:39.631946 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:18:39.640974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:18:39.654847 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 2 00:18:39.700523 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:18:39.745252 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:18:39.805046 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:18:39.809843 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:18:39.944973 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:18:39.975750 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:18:39.980130 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 00:18:39.988261 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 00:18:39.989653 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:18:40.016008 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 00:18:40.067827 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 00:18:40.071120 ignition[922]: INFO : Ignition 2.18.0 Jul 2 00:18:40.071120 ignition[922]: INFO : Stage: mount Jul 2 00:18:40.071120 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:18:40.071120 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:18:40.071120 ignition[922]: INFO : mount: mount passed Jul 2 00:18:40.071120 ignition[922]: INFO : Ignition finished successfully Jul 2 00:18:40.071478 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:18:40.146787 systemd-networkd[759]: eth0: Gained IPv6LL Jul 2 00:18:40.575687 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 2 00:18:40.713578 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (933) Jul 2 00:18:40.716338 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:18:40.716376 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:18:40.716391 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:18:40.799548 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:18:40.801873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:18:40.876285 ignition[950]: INFO : Ignition 2.18.0 Jul 2 00:18:40.876285 ignition[950]: INFO : Stage: files Jul 2 00:18:40.919053 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:18:40.919053 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:18:40.919053 ignition[950]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:18:40.919053 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:18:40.919053 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:18:40.928681 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:18:40.928681 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:18:40.928681 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:18:40.928681 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:18:40.928681 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 00:18:40.921876 unknown[950]: wrote ssh authorized keys file for user: core Jul 2 00:18:40.951969 ignition[950]: INFO : 
files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 00:18:41.098693 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:18:41.098693 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:18:41.105386 
ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:18:41.105386 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 00:18:41.498389 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 00:18:42.102860 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:18:42.102860 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 00:18:42.128769 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:18:42.130977 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:18:42.130977 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 00:18:42.130977 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 2 00:18:42.130977 ignition[950]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:18:42.137689 ignition[950]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:18:42.137689 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" 
Jul 2 00:18:42.137689 ignition[950]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 00:18:42.167105 ignition[950]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:18:42.187002 ignition[950]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:18:42.188797 ignition[950]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 00:18:42.188797 ignition[950]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:18:42.188797 ignition[950]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:18:42.188797 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:18:42.188797 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:18:42.188797 ignition[950]: INFO : files: files passed Jul 2 00:18:42.188797 ignition[950]: INFO : Ignition finished successfully Jul 2 00:18:42.191264 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:18:42.204780 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:18:42.236781 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 00:18:42.239035 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:18:42.239176 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 2 00:18:42.248987 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 00:18:42.251732 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:18:42.251732 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:18:42.255923 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:18:42.254711 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:18:42.258225 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:18:42.269742 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:18:42.305521 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:18:42.305659 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:18:42.311423 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:18:42.314030 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:18:42.316645 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:18:42.317866 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:18:42.339945 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:18:42.353775 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:18:42.366164 systemd[1]: Stopped target network.target - Network.
Jul 2 00:18:42.416787 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:18:42.419617 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:18:42.422853 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:18:42.425529 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:18:42.425727 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:18:42.428605 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:18:42.500173 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:18:42.503381 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:18:42.506541 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:18:42.509783 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:18:42.513200 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:18:42.516944 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:18:42.519657 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:18:42.561303 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:18:42.563923 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:18:42.566327 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:18:42.566509 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:18:42.569433 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:18:42.571988 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:18:42.574376 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:18:42.574559 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:18:42.576932 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:18:42.577110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:18:42.579813 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:18:42.579964 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:18:42.581987 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:18:42.584358 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:18:42.587578 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:18:42.606438 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:18:42.611030 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:18:42.613737 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:18:42.613881 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:18:42.616395 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:18:42.616520 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:18:42.650696 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:18:42.650859 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:18:42.653659 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:18:42.653785 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:18:42.701971 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:18:42.707256 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:18:42.710421 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:18:42.712317 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:18:42.714727 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:18:42.726373 ignition[1005]: INFO : Ignition 2.18.0
Jul 2 00:18:42.726373 ignition[1005]: INFO : Stage: umount
Jul 2 00:18:42.726373 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:18:42.726373 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:18:42.726373 ignition[1005]: INFO : umount: umount passed
Jul 2 00:18:42.726373 ignition[1005]: INFO : Ignition finished successfully
Jul 2 00:18:42.715822 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:18:42.718034 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:18:42.718197 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:18:42.720761 systemd-networkd[759]: eth0: DHCPv6 lease lost
Jul 2 00:18:42.728309 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:18:42.728725 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:18:42.754035 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:18:42.754315 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:18:42.757911 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:18:42.758071 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:18:42.764060 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:18:42.764778 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:18:42.764919 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:18:42.769468 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:18:42.769637 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:18:42.771704 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:18:42.771766 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:18:42.773663 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:18:42.773753 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:18:42.821168 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:18:42.821271 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:18:42.823673 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:18:42.823735 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:18:42.826020 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:18:42.826078 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:18:42.828331 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:18:42.828384 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:18:42.845755 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:18:42.893162 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:18:42.893276 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:18:42.896726 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:18:42.896812 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:18:42.899393 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:18:42.899536 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:18:42.952053 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:18:42.952140 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:18:42.953814 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:18:43.005755 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:18:43.005922 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:18:43.012759 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:18:43.013036 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:18:43.039861 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:18:43.039942 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:18:43.042105 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:18:43.042159 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:18:43.044716 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:18:43.044802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:18:43.047216 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:18:43.047274 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:18:43.073078 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:18:43.073171 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:18:43.087794 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:18:43.112627 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:18:43.112743 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:18:43.115269 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 2 00:18:43.115333 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:18:43.117914 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:18:43.117967 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:18:43.120507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:18:43.120562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:18:43.123352 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:18:43.123517 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:18:43.151987 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:18:43.184835 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:18:43.195857 systemd[1]: Switching root.
Jul 2 00:18:43.264940 systemd-journald[193]: Journal stopped
Jul 2 00:18:44.831564 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:18:44.831644 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:18:44.831666 kernel: SELinux: policy capability open_perms=1
Jul 2 00:18:44.831687 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:18:44.831701 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:18:44.831717 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:18:44.831732 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:18:44.831751 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:18:44.831765 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:18:44.831783 kernel: audit: type=1403 audit(1719879523.783:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:18:44.831806 systemd[1]: Successfully loaded SELinux policy in 63.813ms.
Jul 2 00:18:44.831825 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.679ms.
Jul 2 00:18:44.831842 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:18:44.831858 systemd[1]: Detected virtualization kvm.
Jul 2 00:18:44.831873 systemd[1]: Detected architecture x86-64.
Jul 2 00:18:44.831889 systemd[1]: Detected first boot.
Jul 2 00:18:44.831914 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:18:44.831944 zram_generator::config[1050]: No configuration found.
Jul 2 00:18:44.831963 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:18:44.831976 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:18:44.831988 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:18:44.832000 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:18:44.832017 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:18:44.832043 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:18:44.832064 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:18:44.832079 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:18:44.832099 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:18:44.832115 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:18:44.832131 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:18:44.832144 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:18:44.832165 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:18:44.832188 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:18:44.832205 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:18:44.832221 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:18:44.832234 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:18:44.832251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:18:44.832268 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:18:44.832281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:18:44.832293 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:18:44.832305 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:18:44.832317 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:18:44.832330 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:18:44.832345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:18:44.832357 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:18:44.832371 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:18:44.832383 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:18:44.832395 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:18:44.832407 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:18:44.832419 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:18:44.832432 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:18:44.832456 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:18:44.832468 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:18:44.832495 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:18:44.832507 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:18:44.832519 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:18:44.832531 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:18:44.832543 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:18:44.832555 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:18:44.832567 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:18:44.832580 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:18:44.832595 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:18:44.832607 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:18:44.832619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:18:44.832631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:18:44.832643 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:18:44.832656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:18:44.832668 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:18:44.832680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:18:44.832692 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:18:44.832707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:18:44.832719 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:18:44.832730 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:18:44.832743 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:18:44.832758 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:18:44.832774 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:18:44.832786 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:18:44.832798 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:18:44.832810 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:18:44.832825 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:18:44.832840 kernel: loop: module loaded
Jul 2 00:18:44.832853 kernel: fuse: init (API version 7.39)
Jul 2 00:18:44.832865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:18:44.832880 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:18:44.832897 systemd[1]: Stopped verity-setup.service.
Jul 2 00:18:44.832919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:18:44.832948 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:18:44.832973 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:18:44.832986 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:18:44.832998 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:18:44.833010 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:18:44.833022 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:18:44.833059 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:18:44.833097 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:18:44.833129 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:18:44.833151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:18:44.833170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:18:44.833198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:18:44.833221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:18:44.833240 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:18:44.833261 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:18:44.833291 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:18:44.833311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:18:44.833333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:18:44.833364 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:18:44.833421 systemd-journald[1119]: Collecting audit messages is disabled.
Jul 2 00:18:44.833495 systemd-journald[1119]: Journal started
Jul 2 00:18:44.833526 systemd-journald[1119]: Runtime Journal (/run/log/journal/f7ca357a1f0d4e68927f3aefa9ebb3b2) is 6.0M, max 48.3M, 42.3M free.
Jul 2 00:18:44.491574 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:18:44.509982 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:18:44.510529 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:18:44.510935 systemd[1]: systemd-journald.service: Consumed 2.089s CPU time.
Jul 2 00:18:44.835504 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:18:44.836686 kernel: ACPI: bus type drm_connector registered
Jul 2 00:18:44.840528 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:18:44.842849 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:18:44.844871 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:18:44.845127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:18:44.860619 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:18:44.873825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:18:44.877470 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:18:44.878899 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:18:44.878944 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:18:44.881678 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:18:44.886236 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:18:44.892988 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:18:44.895730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:18:44.898301 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:18:44.902474 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:18:44.903976 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:18:44.906318 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:18:44.907832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:18:44.913452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:18:44.926895 systemd-journald[1119]: Time spent on flushing to /var/log/journal/f7ca357a1f0d4e68927f3aefa9ebb3b2 is 44.554ms for 991 entries.
Jul 2 00:18:44.926895 systemd-journald[1119]: System Journal (/var/log/journal/f7ca357a1f0d4e68927f3aefa9ebb3b2) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:18:44.999026 systemd-journald[1119]: Received client request to flush runtime journal.
Jul 2 00:18:44.999077 kernel: loop0: detected capacity change from 0 to 139904
Jul 2 00:18:44.999097 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:18:44.922687 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:18:44.925718 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:18:44.930363 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:18:44.933602 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:18:44.937611 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:18:44.942421 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:18:44.948313 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:18:44.958421 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:18:44.991331 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:18:44.996586 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:18:45.003009 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:18:45.027783 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:18:45.030142 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:18:45.036515 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:18:45.046474 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:18:45.047246 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:18:45.052213 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Jul 2 00:18:45.052240 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Jul 2 00:18:45.062538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:18:45.075523 kernel: loop1: detected capacity change from 0 to 209816
Jul 2 00:18:45.075786 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:18:45.122619 kernel: loop2: detected capacity change from 0 to 80568
Jul 2 00:18:45.124736 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:18:45.140411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:18:45.188151 kernel: loop3: detected capacity change from 0 to 139904
Jul 2 00:18:45.190706 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jul 2 00:18:45.190735 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jul 2 00:18:45.198891 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:18:45.228528 kernel: loop4: detected capacity change from 0 to 209816
Jul 2 00:18:45.252572 kernel: loop5: detected capacity change from 0 to 80568
Jul 2 00:18:45.259912 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 00:18:45.260680 (sd-merge)[1189]: Merged extensions into '/usr'.
Jul 2 00:18:45.266081 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:18:45.266100 systemd[1]: Reloading...
Jul 2 00:18:45.337635 zram_generator::config[1214]: No configuration found.
Jul 2 00:18:45.498674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:18:45.558758 systemd[1]: Reloading finished in 292 ms.
Jul 2 00:18:45.565975 ldconfig[1158]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:18:45.599861 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:18:45.616861 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:18:45.619704 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:18:45.623352 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:18:45.631170 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:18:45.631190 systemd[1]: Reloading...
Jul 2 00:18:45.651913 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:18:45.652288 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:18:45.653588 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:18:45.653996 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Jul 2 00:18:45.654086 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Jul 2 00:18:45.659561 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:18:45.659844 systemd-tmpfiles[1251]: Skipping /boot
Jul 2 00:18:45.676848 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:18:45.678657 systemd-tmpfiles[1251]: Skipping /boot
Jul 2 00:18:45.711495 zram_generator::config[1286]: No configuration found.
Jul 2 00:18:45.849589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:18:45.915065 systemd[1]: Reloading finished in 283 ms.
Jul 2 00:18:45.936376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:18:45.958338 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:18:46.034803 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:18:46.038241 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:18:46.044950 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:18:46.063891 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:18:46.074061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:18:46.074282 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:18:46.075319 augenrules[1335]: No rules
Jul 2 00:18:46.076372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:18:46.079906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:18:46.082878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:18:46.084111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:18:46.086768 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:18:46.087852 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:18:46.089749 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:18:46.091776 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:18:46.094059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:18:46.094421 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:18:46.100841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:18:46.101030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:18:46.103010 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:18:46.103228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:18:46.123538 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:18:46.123795 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:18:46.130071 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:18:46.136845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:18:46.143669 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:18:46.145363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:18:46.145559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:18:46.147139 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:18:46.158076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:18:46.158307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:18:46.161589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:18:46.162588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:18:46.165504 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:18:46.165735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:18:46.178459 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:18:46.195198 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:18:46.197462 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:18:46.211797 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:18:46.212232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:18:46.214136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:18:46.217823 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:18:46.223391 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:18:46.228129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:18:46.229606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:18:46.233727 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:18:46.238109 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:18:46.239510 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 00:18:46.239746 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:18:46.241890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:18:46.242144 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:18:46.244694 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:18:46.244934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:18:46.247198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:18:46.247432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:18:46.250256 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:18:46.250509 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:18:46.254772 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:18:46.258174 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 00:18:46.268363 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:18:46.268507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:18:46.285393 systemd-resolved[1328]: Positive Trust Anchors:
Jul 2 00:18:46.285430 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:18:46.285475 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:18:46.285945 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:18:46.289358 systemd-udevd[1367]: Using default interface naming scheme 'v255'.
Jul 2 00:18:46.291864 systemd-resolved[1328]: Defaulting to hostname 'linux'.
Jul 2 00:18:46.294579 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:18:46.296458 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:18:46.324111 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:18:46.335853 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:18:46.374055 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 00:18:46.376329 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 2 00:18:46.376384 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 00:18:46.391135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1390)
Jul 2 00:18:46.393513 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1392)
Jul 2 00:18:46.422626 systemd-networkd[1380]: lo: Link UP
Jul 2 00:18:46.422644 systemd-networkd[1380]: lo: Gained carrier
Jul 2 00:18:46.424827 systemd-networkd[1380]: Enumeration completed
Jul 2 00:18:46.425322 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:18:46.425333 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:18:46.425623 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:18:46.427080 systemd[1]: Reached target network.target - Network.
Jul 2 00:18:46.430732 systemd-networkd[1380]: eth0: Link UP
Jul 2 00:18:46.430746 systemd-networkd[1380]: eth0: Gained carrier
Jul 2 00:18:46.430772 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:18:46.435788 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 00:18:46.472328 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 00:18:46.472507 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 2 00:18:46.474615 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection.
Jul 2 00:18:46.476811 systemd-timesyncd[1375]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 2 00:18:46.476883 systemd-timesyncd[1375]: Initial clock synchronization to Tue 2024-07-02 00:18:46.407220 UTC.
Jul 2 00:18:46.482672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 00:18:46.485874 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Jul 2 00:18:46.486134 kernel: ACPI: button: Power Button [PWRF]
Jul 2 00:18:46.487966 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:18:46.502963 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 00:18:46.516528 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 2 00:18:46.539724 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 00:18:46.550870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:18:46.566070 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:18:46.566388 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:18:46.590519 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 00:18:46.592177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:18:46.699718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:18:46.705939 kernel: kvm_amd: TSC scaling supported
Jul 2 00:18:46.706106 kernel: kvm_amd: Nested Virtualization enabled
Jul 2 00:18:46.706132 kernel: kvm_amd: Nested Paging enabled
Jul 2 00:18:46.706150 kernel: kvm_amd: LBR virtualization supported
Jul 2 00:18:46.706606 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 2 00:18:46.707771 kernel: kvm_amd: Virtual GIF supported
Jul 2 00:18:46.753509 kernel: EDAC MC: Ver: 3.0.0
Jul 2 00:18:46.789384 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 00:18:46.800841 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 00:18:46.813305 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:18:46.847364 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 00:18:46.850372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:18:46.851891 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:18:46.853447 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 00:18:46.855107 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 00:18:46.857030 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 00:18:46.858850 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 00:18:46.860516 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 00:18:46.862160 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 00:18:46.862207 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:18:46.863436 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:18:46.865664 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 00:18:46.869324 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 00:18:46.881276 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 00:18:46.884352 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 00:18:46.886604 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 00:18:46.888098 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:18:46.889347 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:18:46.890662 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:18:46.890696 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 00:18:46.891975 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 00:18:46.894929 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 00:18:46.897649 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:18:46.900406 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 00:18:46.903769 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 00:18:46.905358 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 00:18:46.909596 jq[1431]: false
Jul 2 00:18:46.909994 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 00:18:46.914796 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 00:18:46.940578 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 00:18:46.944200 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 00:18:46.952283 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 00:18:46.956539 extend-filesystems[1432]: Found loop3
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found loop4
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found loop5
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found sr0
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found vda
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found vda1
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found vda2
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found vda3
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found usr
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found vda4
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found vda6
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found vda7
Jul 2 00:18:46.961381 extend-filesystems[1432]: Found vda9
Jul 2 00:18:46.961381 extend-filesystems[1432]: Checking size of /dev/vda9
Jul 2 00:18:46.958015 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 00:18:46.967040 dbus-daemon[1430]: [system] SELinux support is enabled
Jul 2 00:18:46.958732 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 00:18:46.960015 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 00:18:46.964673 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 00:18:46.968365 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 00:18:46.971911 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 00:18:46.977941 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 00:18:46.978210 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 00:18:46.978668 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 00:18:46.978897 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 00:18:46.983205 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 00:18:46.983435 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 00:18:46.991518 jq[1447]: true
Jul 2 00:18:46.997899 extend-filesystems[1432]: Resized partition /dev/vda9
Jul 2 00:18:47.000553 update_engine[1445]: I0702 00:18:46.999454 1445 main.cc:92] Flatcar Update Engine starting
Jul 2 00:18:47.003475 extend-filesystems[1459]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 00:18:47.010832 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1390)
Jul 2 00:18:47.010866 update_engine[1445]: I0702 00:18:47.008749 1445 update_check_scheduler.cc:74] Next update check in 5m24s
Jul 2 00:18:47.007134 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 00:18:47.007158 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 00:18:47.017517 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 2 00:18:47.015085 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 00:18:47.015110 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 00:18:47.018811 jq[1458]: true
Jul 2 00:18:47.028031 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 2 00:18:47.043713 tar[1452]: linux-amd64/helm
Jul 2 00:18:47.058448 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 00:18:47.075195 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 00:18:47.079530 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 00:18:47.079559 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 00:18:47.080178 systemd-logind[1443]: New seat seat0.
Jul 2 00:18:47.081327 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 00:18:47.085509 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 2 00:18:47.140699 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 00:18:47.140699 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 00:18:47.140699 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 2 00:18:47.150786 extend-filesystems[1432]: Resized filesystem in /dev/vda9
Jul 2 00:18:47.142310 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:18:47.143263 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 00:18:47.143556 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 00:18:47.157173 bash[1484]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 00:18:47.158572 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 00:18:47.163738 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 2 00:18:47.280638 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:18:47.328164 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:18:47.337937 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:18:47.369451 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:18:47.369827 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:18:47.390089 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:18:47.413027 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:18:47.421917 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:18:47.449957 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 00:18:47.451543 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:18:47.593386 containerd[1464]: time="2024-07-02T00:18:47.592861865Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Jul 2 00:18:47.637344 containerd[1464]: time="2024-07-02T00:18:47.637194106Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 00:18:47.637344 containerd[1464]: time="2024-07-02T00:18:47.637279185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:18:47.640356 containerd[1464]: time="2024-07-02T00:18:47.639959147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:18:47.640356 containerd[1464]: time="2024-07-02T00:18:47.640000465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:18:47.640356 containerd[1464]: time="2024-07-02T00:18:47.640325430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:18:47.640356 containerd[1464]: time="2024-07-02T00:18:47.640346438Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 00:18:47.641000 containerd[1464]: time="2024-07-02T00:18:47.640695972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 00:18:47.641000 containerd[1464]: time="2024-07-02T00:18:47.640852754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:18:47.641000 containerd[1464]: time="2024-07-02T00:18:47.640871767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 00:18:47.641081 containerd[1464]: time="2024-07-02T00:18:47.641036419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:18:47.641364 containerd[1464]: time="2024-07-02T00:18:47.641336865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 00:18:47.641417 containerd[1464]: time="2024-07-02T00:18:47.641363190Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 00:18:47.641417 containerd[1464]: time="2024-07-02T00:18:47.641376128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 00:18:47.641564 containerd[1464]: time="2024-07-02T00:18:47.641536630Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 00:18:47.641564 containerd[1464]: time="2024-07-02T00:18:47.641556411Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 00:18:47.641671 containerd[1464]: time="2024-07-02T00:18:47.641650838Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 00:18:47.641704 containerd[1464]: time="2024-07-02T00:18:47.641669860Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 00:18:47.659696 containerd[1464]: time="2024-07-02T00:18:47.659635251Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 00:18:47.659696 containerd[1464]: time="2024-07-02T00:18:47.659696130Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 00:18:47.659696 containerd[1464]: time="2024-07-02T00:18:47.659718235Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 00:18:47.659917 containerd[1464]: time="2024-07-02T00:18:47.659770835Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 00:18:47.659917 containerd[1464]: time="2024-07-02T00:18:47.659797748Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 00:18:47.659917 containerd[1464]: time="2024-07-02T00:18:47.659817589Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 00:18:47.659917 containerd[1464]: time="2024-07-02T00:18:47.659834757Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 00:18:47.660161 containerd[1464]: time="2024-07-02T00:18:47.660053804Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 00:18:47.660161 containerd[1464]: time="2024-07-02T00:18:47.660091700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 00:18:47.660161 containerd[1464]: time="2024-07-02T00:18:47.660137417Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 00:18:47.660247 containerd[1464]: time="2024-07-02T00:18:47.660161727Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 00:18:47.660247 containerd[1464]: time="2024-07-02T00:18:47.660190286Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 00:18:47.660247 containerd[1464]: time="2024-07-02T00:18:47.660215274Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 00:18:47.660247 containerd[1464]: time="2024-07-02T00:18:47.660234397Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 00:18:47.660413 containerd[1464]: time="2024-07-02T00:18:47.660250288Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 00:18:47.660413 containerd[1464]: time="2024-07-02T00:18:47.660270627Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 00:18:47.660413 containerd[1464]: time="2024-07-02T00:18:47.660288961Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 00:18:47.660413 containerd[1464]: time="2024-07-02T00:18:47.660306837Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 00:18:47.660413 containerd[1464]: time="2024-07-02T00:18:47.660323296Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 00:18:47.660563 containerd[1464]: time="2024-07-02T00:18:47.660468537Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 00:18:47.660859 containerd[1464]: time="2024-07-02T00:18:47.660820116Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 00:18:47.660893 containerd[1464]: time="2024-07-02T00:18:47.660862401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.660893 containerd[1464]: time="2024-07-02T00:18:47.660882840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 2 00:18:47.660950 containerd[1464]: time="2024-07-02T00:18:47.660921006Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 00:18:47.661031 containerd[1464]: time="2024-07-02T00:18:47.660994713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661062 containerd[1464]: time="2024-07-02T00:18:47.661025268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661062 containerd[1464]: time="2024-07-02T00:18:47.661049408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661109 containerd[1464]: time="2024-07-02T00:18:47.661061677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661109 containerd[1464]: time="2024-07-02T00:18:47.661075224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661109 containerd[1464]: time="2024-07-02T00:18:47.661087025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661109 containerd[1464]: time="2024-07-02T00:18:47.661098936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661109 containerd[1464]: time="2024-07-02T00:18:47.661109559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661230 containerd[1464]: time="2024-07-02T00:18:47.661122377Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 00:18:47.661326 containerd[1464]: time="2024-07-02T00:18:47.661291179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661326 containerd[1464]: time="2024-07-02T00:18:47.661318910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661379 containerd[1464]: time="2024-07-02T00:18:47.661336536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661379 containerd[1464]: time="2024-07-02T00:18:47.661353484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661379 containerd[1464]: time="2024-07-02T00:18:47.661370313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661456 containerd[1464]: time="2024-07-02T00:18:47.661389286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661456 containerd[1464]: time="2024-07-02T00:18:47.661407900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661456 containerd[1464]: time="2024-07-02T00:18:47.661424080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 00:18:47.661952 containerd[1464]: time="2024-07-02T00:18:47.661857866Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 00:18:47.661952 containerd[1464]: time="2024-07-02T00:18:47.661943972Z" level=info msg="Connect containerd service"
Jul 2 00:18:47.662183 containerd[1464]: time="2024-07-02T00:18:47.661976651Z" level=info msg="using legacy CRI server"
Jul 2 00:18:47.662183 containerd[1464]: time="2024-07-02T00:18:47.661987604Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 2 00:18:47.662183 containerd[1464]: time="2024-07-02T00:18:47.662106869Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 00:18:47.665936 containerd[1464]: time="2024-07-02T00:18:47.665871783Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:18:47.666068 containerd[1464]: time="2024-07-02T00:18:47.665967067Z" level=info msg="loading plugin
\"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:18:47.666068 containerd[1464]: time="2024-07-02T00:18:47.665991995Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:18:47.666068 containerd[1464]: time="2024-07-02T00:18:47.666006031Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:18:47.666068 containerd[1464]: time="2024-07-02T00:18:47.666024535Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:18:47.666237 containerd[1464]: time="2024-07-02T00:18:47.666191202Z" level=info msg="Start subscribing containerd event" Jul 2 00:18:47.666287 containerd[1464]: time="2024-07-02T00:18:47.666260470Z" level=info msg="Start recovering state" Jul 2 00:18:47.666529 containerd[1464]: time="2024-07-02T00:18:47.666425571Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:18:47.666529 containerd[1464]: time="2024-07-02T00:18:47.666507080Z" level=info msg="Start event monitor" Jul 2 00:18:47.666529 containerd[1464]: time="2024-07-02T00:18:47.666527329Z" level=info msg="Start snapshots syncer" Jul 2 00:18:47.666702 containerd[1464]: time="2024-07-02T00:18:47.666539479Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:18:47.666702 containerd[1464]: time="2024-07-02T00:18:47.666554841Z" level=info msg="Start streaming server" Jul 2 00:18:47.666702 containerd[1464]: time="2024-07-02T00:18:47.666538243Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:18:47.666769 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 2 00:18:47.667335 containerd[1464]: time="2024-07-02T00:18:47.667310550Z" level=info msg="containerd successfully booted in 0.075759s" Jul 2 00:18:47.825362 tar[1452]: linux-amd64/LICENSE Jul 2 00:18:47.825362 tar[1452]: linux-amd64/README.md Jul 2 00:18:47.827089 systemd-networkd[1380]: eth0: Gained IPv6LL Jul 2 00:18:47.837449 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:18:47.840930 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:18:47.844619 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 00:18:47.848448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:18:47.851514 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:18:47.853530 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:18:47.878621 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 00:18:47.879012 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 00:18:47.881671 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:18:47.885332 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:18:49.178959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:18:49.180947 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:18:49.183293 systemd[1]: Startup finished in 1.716s (kernel) + 8.894s (initrd) + 5.462s (userspace) = 16.073s. Jul 2 00:18:49.185122 (kubelet)[1542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:18:49.357344 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 2 00:18:49.358699 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:51086.service - OpenSSH per-connection server daemon (10.0.0.1:51086). Jul 2 00:18:49.420163 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 51086 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:18:49.437833 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:49.450450 systemd-logind[1443]: New session 1 of user core. Jul 2 00:18:49.452076 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:18:49.461932 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:18:49.486235 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:18:49.496226 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:18:49.500294 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:49.623893 systemd[1557]: Queued start job for default target default.target. Jul 2 00:18:49.633977 systemd[1557]: Created slice app.slice - User Application Slice. Jul 2 00:18:49.634007 systemd[1557]: Reached target paths.target - Paths. Jul 2 00:18:49.634021 systemd[1557]: Reached target timers.target - Timers. Jul 2 00:18:49.635744 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:18:49.652651 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:18:49.652811 systemd[1557]: Reached target sockets.target - Sockets. Jul 2 00:18:49.652828 systemd[1557]: Reached target basic.target - Basic System. Jul 2 00:18:49.652883 systemd[1557]: Reached target default.target - Main User Target. Jul 2 00:18:49.652928 systemd[1557]: Startup finished in 144ms. Jul 2 00:18:49.653002 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:18:49.661645 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 2 00:18:49.727777 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:51092.service - OpenSSH per-connection server daemon (10.0.0.1:51092). Jul 2 00:18:49.766447 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 51092 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:18:49.768745 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:49.774229 systemd-logind[1443]: New session 2 of user core. Jul 2 00:18:49.784702 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:18:49.849988 sshd[1568]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:49.862849 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:51092.service: Deactivated successfully. Jul 2 00:18:49.865309 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:18:49.867587 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:18:49.877881 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:51100.service - OpenSSH per-connection server daemon (10.0.0.1:51100). Jul 2 00:18:49.879792 systemd-logind[1443]: Removed session 2. Jul 2 00:18:49.915021 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 51100 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:18:49.916840 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:49.921618 systemd-logind[1443]: New session 3 of user core. Jul 2 00:18:49.944886 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:18:49.999528 sshd[1575]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:50.011959 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:51100.service: Deactivated successfully. Jul 2 00:18:50.014171 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:18:50.015901 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. 
Jul 2 00:18:50.027033 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:51108.service - OpenSSH per-connection server daemon (10.0.0.1:51108). Jul 2 00:18:50.028237 systemd-logind[1443]: Removed session 3. Jul 2 00:18:50.066415 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 51108 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:18:50.068117 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:50.073100 systemd-logind[1443]: New session 4 of user core. Jul 2 00:18:50.083630 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:18:50.148800 sshd[1584]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:50.164146 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:51108.service: Deactivated successfully. Jul 2 00:18:50.166571 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:18:50.168902 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:18:50.170373 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:51116.service - OpenSSH per-connection server daemon (10.0.0.1:51116). Jul 2 00:18:50.171737 systemd-logind[1443]: Removed session 4. Jul 2 00:18:50.221031 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 51116 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:18:50.223688 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:50.228235 systemd-logind[1443]: New session 5 of user core. Jul 2 00:18:50.236667 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 2 00:18:50.309726 kubelet[1542]: E0702 00:18:50.309472 1542 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:18:50.311215 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:18:50.311639 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:18:50.314750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:18:50.314983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:18:50.315283 systemd[1]: kubelet.service: Consumed 1.843s CPU time. Jul 2 00:18:50.327122 sudo[1596]: pam_unix(sudo:session): session closed for user root Jul 2 00:18:50.329314 sshd[1592]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:50.344746 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:51116.service: Deactivated successfully. Jul 2 00:18:50.346828 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:18:50.348833 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:18:50.350467 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:51124.service - OpenSSH per-connection server daemon (10.0.0.1:51124). Jul 2 00:18:50.351400 systemd-logind[1443]: Removed session 5. Jul 2 00:18:50.384040 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 51124 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:18:50.385839 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:50.390236 systemd-logind[1443]: New session 6 of user core. Jul 2 00:18:50.399641 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 2 00:18:50.456751 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:18:50.457125 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:18:50.461461 sudo[1606]: pam_unix(sudo:session): session closed for user root Jul 2 00:18:50.468575 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:18:50.468862 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:18:50.492874 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:18:50.494894 auditctl[1609]: No rules Jul 2 00:18:50.495330 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:18:50.495609 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:18:50.498319 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:18:50.533047 augenrules[1627]: No rules Jul 2 00:18:50.535048 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:18:50.536475 sudo[1605]: pam_unix(sudo:session): session closed for user root Jul 2 00:18:50.538251 sshd[1602]: pam_unix(sshd:session): session closed for user core Jul 2 00:18:50.550418 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:51124.service: Deactivated successfully. Jul 2 00:18:50.552219 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:18:50.553738 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:18:50.555155 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:51134.service - OpenSSH per-connection server daemon (10.0.0.1:51134). Jul 2 00:18:50.556054 systemd-logind[1443]: Removed session 6. 
Jul 2 00:18:50.587745 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 51134 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:18:50.589214 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:18:50.594253 systemd-logind[1443]: New session 7 of user core. Jul 2 00:18:50.603693 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:18:50.658464 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:18:50.658853 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:18:50.983229 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:18:50.986318 (dockerd)[1649]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:18:51.435679 dockerd[1649]: time="2024-07-02T00:18:51.435537215Z" level=info msg="Starting up" Jul 2 00:18:53.353891 dockerd[1649]: time="2024-07-02T00:18:53.353815281Z" level=info msg="Loading containers: start." Jul 2 00:18:54.210513 kernel: Initializing XFRM netlink socket Jul 2 00:18:54.341536 systemd-networkd[1380]: docker0: Link UP Jul 2 00:18:54.616216 dockerd[1649]: time="2024-07-02T00:18:54.616141103Z" level=info msg="Loading containers: done." Jul 2 00:18:54.733846 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1889914318-merged.mount: Deactivated successfully. 
Jul 2 00:18:54.741516 dockerd[1649]: time="2024-07-02T00:18:54.741432732Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:18:54.741753 dockerd[1649]: time="2024-07-02T00:18:54.741723634Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:18:54.741919 dockerd[1649]: time="2024-07-02T00:18:54.741898798Z" level=info msg="Daemon has completed initialization" Jul 2 00:18:54.838785 dockerd[1649]: time="2024-07-02T00:18:54.838714549Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:18:54.840277 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:18:55.840596 containerd[1464]: time="2024-07-02T00:18:55.840531809Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 00:18:58.697742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626607242.mount: Deactivated successfully. Jul 2 00:19:00.526319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:19:00.535837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:00.712627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:19:00.716830 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:19:01.223798 kubelet[1829]: E0702 00:19:01.223671 1829 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:19:01.235530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:19:01.235861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:19:04.679402 containerd[1464]: time="2024-07-02T00:19:04.679285094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:04.770183 containerd[1464]: time="2024-07-02T00:19:04.770079460Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jul 2 00:19:04.783181 containerd[1464]: time="2024-07-02T00:19:04.783115359Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:04.836239 containerd[1464]: time="2024-07-02T00:19:04.836173931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:04.837866 containerd[1464]: time="2024-07-02T00:19:04.837831080Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 8.997233465s" Jul 2 00:19:04.837944 containerd[1464]: time="2024-07-02T00:19:04.837877549Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 00:19:04.870046 containerd[1464]: time="2024-07-02T00:19:04.869985146Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 00:19:11.276230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:19:11.286701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:11.556258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:11.571928 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:19:11.652294 kubelet[1880]: E0702 00:19:11.652201 1880 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:19:11.657343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:19:11.657579 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 00:19:14.786588 containerd[1464]: time="2024-07-02T00:19:14.786470646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:14.817785 containerd[1464]: time="2024-07-02T00:19:14.817654959Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jul 2 00:19:14.848400 containerd[1464]: time="2024-07-02T00:19:14.848331115Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:14.921032 containerd[1464]: time="2024-07-02T00:19:14.920946404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:14.922653 containerd[1464]: time="2024-07-02T00:19:14.922536570Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 10.052475212s" Jul 2 00:19:14.922653 containerd[1464]: time="2024-07-02T00:19:14.922635491Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 00:19:14.974251 containerd[1464]: time="2024-07-02T00:19:14.974195148Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 00:19:17.942402 containerd[1464]: time="2024-07-02T00:19:17.942324635Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:17.981740 containerd[1464]: time="2024-07-02T00:19:17.981643021Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jul 2 00:19:18.012916 containerd[1464]: time="2024-07-02T00:19:18.012852012Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:18.043095 containerd[1464]: time="2024-07-02T00:19:18.043034775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:18.044213 containerd[1464]: time="2024-07-02T00:19:18.044172261Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 3.069922612s" Jul 2 00:19:18.044272 containerd[1464]: time="2024-07-02T00:19:18.044212614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 00:19:18.069010 containerd[1464]: time="2024-07-02T00:19:18.068960208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 00:19:21.234383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330762587.mount: Deactivated successfully. Jul 2 00:19:21.776201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jul 2 00:19:21.783808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:21.951070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:21.957134 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:19:22.578658 kubelet[1923]: E0702 00:19:22.578591 1923 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:19:22.583922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:19:22.584186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:19:22.678443 containerd[1464]: time="2024-07-02T00:19:22.678354005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:22.685500 containerd[1464]: time="2024-07-02T00:19:22.685396207Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jul 2 00:19:22.712539 containerd[1464]: time="2024-07-02T00:19:22.712447934Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:22.760014 containerd[1464]: time="2024-07-02T00:19:22.759925659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:22.760877 containerd[1464]: time="2024-07-02T00:19:22.760814353Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 4.691792824s" Jul 2 00:19:22.760921 containerd[1464]: time="2024-07-02T00:19:22.760877093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 00:19:22.785584 containerd[1464]: time="2024-07-02T00:19:22.785532900Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:19:25.022456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount374722746.mount: Deactivated successfully. Jul 2 00:19:25.354514 containerd[1464]: time="2024-07-02T00:19:25.354386963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:25.405830 containerd[1464]: time="2024-07-02T00:19:25.405729573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 00:19:25.450616 containerd[1464]: time="2024-07-02T00:19:25.450535028Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:25.535038 containerd[1464]: time="2024-07-02T00:19:25.534875902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:25.536001 containerd[1464]: time="2024-07-02T00:19:25.535932920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag 
\"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.750348916s" Jul 2 00:19:25.536001 containerd[1464]: time="2024-07-02T00:19:25.535993141Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 00:19:25.559957 containerd[1464]: time="2024-07-02T00:19:25.559911655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 00:19:27.821057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645425606.mount: Deactivated successfully. Jul 2 00:19:31.544877 containerd[1464]: time="2024-07-02T00:19:31.544787823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:31.558954 containerd[1464]: time="2024-07-02T00:19:31.558833645Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jul 2 00:19:31.569259 containerd[1464]: time="2024-07-02T00:19:31.569174606Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:31.590911 containerd[1464]: time="2024-07-02T00:19:31.590848075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:31.592558 containerd[1464]: time="2024-07-02T00:19:31.592468775Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" 
in 6.032500663s" Jul 2 00:19:31.592558 containerd[1464]: time="2024-07-02T00:19:31.592551229Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 00:19:31.617573 containerd[1464]: time="2024-07-02T00:19:31.617506854Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 00:19:32.148721 update_engine[1445]: I0702 00:19:32.148626 1445 update_attempter.cc:509] Updating boot flags... Jul 2 00:19:32.190516 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2008) Jul 2 00:19:32.236535 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2007) Jul 2 00:19:32.255520 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2007) Jul 2 00:19:32.776149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 00:19:32.785941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:32.934982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:32.940974 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:19:33.098547 kubelet[2025]: E0702 00:19:33.097871 2025 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:19:33.102375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:19:33.102589 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 00:19:37.838439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008898661.mount: Deactivated successfully. Jul 2 00:19:38.529804 containerd[1464]: time="2024-07-02T00:19:38.529724425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:38.550176 containerd[1464]: time="2024-07-02T00:19:38.550082685Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jul 2 00:19:38.565399 containerd[1464]: time="2024-07-02T00:19:38.565372354Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:38.577248 containerd[1464]: time="2024-07-02T00:19:38.577201630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:19:38.578407 containerd[1464]: time="2024-07-02T00:19:38.578347004Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 6.960784347s" Jul 2 00:19:38.578455 containerd[1464]: time="2024-07-02T00:19:38.578407447Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jul 2 00:19:41.236511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:41.246913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 2 00:19:41.267543 systemd[1]: Reloading requested from client PID 2118 ('systemctl') (unit session-7.scope)... Jul 2 00:19:41.267570 systemd[1]: Reloading... Jul 2 00:19:41.362516 zram_generator::config[2156]: No configuration found. Jul 2 00:19:41.623323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:19:41.730961 systemd[1]: Reloading finished in 463 ms. Jul 2 00:19:41.784329 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:19:41.784431 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:19:41.784745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:41.802942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:41.953711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:41.958903 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:19:42.015253 kubelet[2204]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:19:42.015253 kubelet[2204]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:19:42.015253 kubelet[2204]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:19:42.015679 kubelet[2204]: I0702 00:19:42.015302 2204 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:19:42.429647 kubelet[2204]: I0702 00:19:42.429609 2204 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:19:42.429647 kubelet[2204]: I0702 00:19:42.429638 2204 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:19:42.429858 kubelet[2204]: I0702 00:19:42.429843 2204 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:19:42.446895 kubelet[2204]: I0702 00:19:42.446680 2204 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:19:42.446895 kubelet[2204]: E0702 00:19:42.446810 2204 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:42.464569 kubelet[2204]: I0702 00:19:42.464535 2204 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:19:42.464878 kubelet[2204]: I0702 00:19:42.464834 2204 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:19:42.465079 kubelet[2204]: I0702 00:19:42.465038 2204 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:19:42.465079 kubelet[2204]: I0702 00:19:42.465074 2204 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:19:42.465260 kubelet[2204]: I0702 00:19:42.465087 2204 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:19:42.466092 kubelet[2204]: I0702 
00:19:42.466055 2204 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:19:42.467264 kubelet[2204]: I0702 00:19:42.467229 2204 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:19:42.467264 kubelet[2204]: I0702 00:19:42.467252 2204 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:19:42.467344 kubelet[2204]: I0702 00:19:42.467282 2204 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:19:42.467344 kubelet[2204]: I0702 00:19:42.467297 2204 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:19:42.468634 kubelet[2204]: W0702 00:19:42.468157 2204 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:42.468634 kubelet[2204]: E0702 00:19:42.468242 2204 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:42.468634 kubelet[2204]: I0702 00:19:42.468549 2204 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:19:42.469099 kubelet[2204]: W0702 00:19:42.469050 2204 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:42.469099 kubelet[2204]: E0702 00:19:42.469101 2204 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.84:6443: connect: connection refused Jul 2 00:19:42.470307 kubelet[2204]: W0702 00:19:42.470276 2204 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:19:42.471157 kubelet[2204]: I0702 00:19:42.471128 2204 server.go:1232] "Started kubelet" Jul 2 00:19:42.474524 kubelet[2204]: E0702 00:19:42.472111 2204 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:19:42.474524 kubelet[2204]: E0702 00:19:42.472141 2204 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:19:42.474524 kubelet[2204]: I0702 00:19:42.472221 2204 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:19:42.474524 kubelet[2204]: I0702 00:19:42.472414 2204 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:19:42.474524 kubelet[2204]: I0702 00:19:42.472517 2204 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:19:42.474524 kubelet[2204]: I0702 00:19:42.472572 2204 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:19:42.474783 kubelet[2204]: E0702 00:19:42.473183 2204 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3d5e31a2e766", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 19, 42, 471087974, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 19, 42, 471087974, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.84:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.84:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:19:42.474783 kubelet[2204]: I0702 00:19:42.473270 2204 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:19:42.474783 kubelet[2204]: I0702 00:19:42.473352 2204 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:19:42.474783 kubelet[2204]: I0702 00:19:42.473423 2204 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:19:42.474783 kubelet[2204]: I0702 00:19:42.473594 2204 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:19:42.474783 kubelet[2204]: W0702 00:19:42.473658 2204 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:42.475038 kubelet[2204]: E0702 00:19:42.473687 2204 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 
00:19:42.475038 kubelet[2204]: E0702 00:19:42.473872 2204 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms" Jul 2 00:19:42.493147 kubelet[2204]: I0702 00:19:42.493101 2204 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:19:42.494439 kubelet[2204]: I0702 00:19:42.494272 2204 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:19:42.494439 kubelet[2204]: I0702 00:19:42.494299 2204 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:19:42.494439 kubelet[2204]: I0702 00:19:42.494319 2204 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:19:42.494439 kubelet[2204]: E0702 00:19:42.494373 2204 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:19:42.495088 kubelet[2204]: W0702 00:19:42.494934 2204 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:42.495088 kubelet[2204]: E0702 00:19:42.494978 2204 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:42.507096 kubelet[2204]: I0702 00:19:42.507071 2204 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:19:42.507096 kubelet[2204]: I0702 00:19:42.507097 2204 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:19:42.507165 
kubelet[2204]: I0702 00:19:42.507118 2204 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:19:42.575629 kubelet[2204]: I0702 00:19:42.575579 2204 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:19:42.575924 kubelet[2204]: E0702 00:19:42.575901 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 2 00:19:42.595208 kubelet[2204]: E0702 00:19:42.595112 2204 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:19:42.674892 kubelet[2204]: E0702 00:19:42.674840 2204 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms" Jul 2 00:19:42.777558 kubelet[2204]: I0702 00:19:42.777532 2204 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:19:42.777849 kubelet[2204]: E0702 00:19:42.777830 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 2 00:19:42.796180 kubelet[2204]: E0702 00:19:42.796076 2204 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:19:43.008160 kubelet[2204]: I0702 00:19:43.008029 2204 policy_none.go:49] "None policy: Start" Jul 2 00:19:43.009219 kubelet[2204]: I0702 00:19:43.009176 2204 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:19:43.009301 kubelet[2204]: I0702 00:19:43.009232 2204 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:19:43.020721 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Jul 2 00:19:43.048130 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 00:19:43.052701 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 00:19:43.068074 kubelet[2204]: I0702 00:19:43.068023 2204 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:19:43.068907 kubelet[2204]: I0702 00:19:43.068387 2204 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:19:43.069413 kubelet[2204]: E0702 00:19:43.069389 2204 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:19:43.075683 kubelet[2204]: E0702 00:19:43.075647 2204 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms" Jul 2 00:19:43.179751 kubelet[2204]: I0702 00:19:43.179704 2204 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:19:43.180169 kubelet[2204]: E0702 00:19:43.180142 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 2 00:19:43.196459 kubelet[2204]: I0702 00:19:43.196377 2204 topology_manager.go:215] "Topology Admit Handler" podUID="2596c054a4ef7052cc0a34f335a9f67c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:19:43.197806 kubelet[2204]: I0702 00:19:43.197755 2204 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:19:43.198636 kubelet[2204]: I0702 00:19:43.198615 2204 
topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:19:43.205699 systemd[1]: Created slice kubepods-burstable-pod2596c054a4ef7052cc0a34f335a9f67c.slice - libcontainer container kubepods-burstable-pod2596c054a4ef7052cc0a34f335a9f67c.slice. Jul 2 00:19:43.237919 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jul 2 00:19:43.257850 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. Jul 2 00:19:43.278366 kubelet[2204]: I0702 00:19:43.278313 2204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2596c054a4ef7052cc0a34f335a9f67c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2596c054a4ef7052cc0a34f335a9f67c\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:19:43.278366 kubelet[2204]: I0702 00:19:43.278372 2204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:19:43.278639 kubelet[2204]: I0702 00:19:43.278397 2204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 
00:19:43.278639 kubelet[2204]: I0702 00:19:43.278421 2204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2596c054a4ef7052cc0a34f335a9f67c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2596c054a4ef7052cc0a34f335a9f67c\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:19:43.278639 kubelet[2204]: I0702 00:19:43.278441 2204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2596c054a4ef7052cc0a34f335a9f67c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2596c054a4ef7052cc0a34f335a9f67c\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:19:43.278639 kubelet[2204]: I0702 00:19:43.278466 2204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:19:43.278639 kubelet[2204]: I0702 00:19:43.278509 2204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:19:43.278772 kubelet[2204]: I0702 00:19:43.278539 2204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:19:43.278772 kubelet[2204]: I0702 
00:19:43.278565 2204 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:19:43.478964 kubelet[2204]: W0702 00:19:43.478771 2204 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:43.478964 kubelet[2204]: E0702 00:19:43.478866 2204 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:43.534996 kubelet[2204]: E0702 00:19:43.534930 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:43.535885 containerd[1464]: time="2024-07-02T00:19:43.535821432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2596c054a4ef7052cc0a34f335a9f67c,Namespace:kube-system,Attempt:0,}" Jul 2 00:19:43.555452 kubelet[2204]: E0702 00:19:43.555379 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:43.556229 containerd[1464]: time="2024-07-02T00:19:43.556096823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 00:19:43.560620 kubelet[2204]: E0702 00:19:43.560567 2204 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:43.561522 containerd[1464]: time="2024-07-02T00:19:43.561440174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 00:19:43.669565 kubelet[2204]: W0702 00:19:43.669512 2204 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:43.669565 kubelet[2204]: E0702 00:19:43.669564 2204 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:43.792029 kubelet[2204]: W0702 00:19:43.791935 2204 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:43.792029 kubelet[2204]: E0702 00:19:43.792017 2204 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:43.877154 kubelet[2204]: E0702 00:19:43.877103 2204 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" 
interval="1.6s" Jul 2 00:19:43.982511 kubelet[2204]: I0702 00:19:43.982430 2204 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:19:43.982944 kubelet[2204]: E0702 00:19:43.982906 2204 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 2 00:19:44.004674 kubelet[2204]: W0702 00:19:44.004598 2204 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:44.004674 kubelet[2204]: E0702 00:19:44.004676 2204 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:44.486267 kubelet[2204]: E0702 00:19:44.486197 2204 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.84:6443: connect: connection refused Jul 2 00:19:44.542606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035067885.mount: Deactivated successfully. 
Jul 2 00:19:44.552474 containerd[1464]: time="2024-07-02T00:19:44.552391673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:44.553780 containerd[1464]: time="2024-07-02T00:19:44.553722605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:44.554823 containerd[1464]: time="2024-07-02T00:19:44.554760328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:19:44.555966 containerd[1464]: time="2024-07-02T00:19:44.555911432Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:44.558057 containerd[1464]: time="2024-07-02T00:19:44.557993198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:19:44.559422 containerd[1464]: time="2024-07-02T00:19:44.559353532Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:44.560329 containerd[1464]: time="2024-07-02T00:19:44.560263239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:19:44.567963 containerd[1464]: time="2024-07-02T00:19:44.567896173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:19:44.570357 
containerd[1464]: time="2024-07-02T00:19:44.569232363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.012984976s" Jul 2 00:19:44.572959 containerd[1464]: time="2024-07-02T00:19:44.572663004Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.011054765s" Jul 2 00:19:44.576767 containerd[1464]: time="2024-07-02T00:19:44.576718709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.040758182s" Jul 2 00:19:44.785263 containerd[1464]: time="2024-07-02T00:19:44.784914667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:19:44.785263 containerd[1464]: time="2024-07-02T00:19:44.785012691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:44.785263 containerd[1464]: time="2024-07-02T00:19:44.785077075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:19:44.785263 containerd[1464]: time="2024-07-02T00:19:44.785096548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:44.787008 containerd[1464]: time="2024-07-02T00:19:44.785205771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:19:44.787008 containerd[1464]: time="2024-07-02T00:19:44.785258686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:44.787008 containerd[1464]: time="2024-07-02T00:19:44.785302593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:19:44.787008 containerd[1464]: time="2024-07-02T00:19:44.785322598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:44.787921 containerd[1464]: time="2024-07-02T00:19:44.787823235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:19:44.787921 containerd[1464]: time="2024-07-02T00:19:44.787880877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:44.788006 containerd[1464]: time="2024-07-02T00:19:44.787905701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:19:44.789532 containerd[1464]: time="2024-07-02T00:19:44.788530325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:19:44.812673 systemd[1]: Started cri-containerd-b77791cf7a8858f871f0150f41b1b2a89952b4cd6f6f6555ea43dcf1adcc82ab.scope - libcontainer container b77791cf7a8858f871f0150f41b1b2a89952b4cd6f6f6555ea43dcf1adcc82ab. 
Jul 2 00:19:44.818049 systemd[1]: Started cri-containerd-3193f97c4aff18041d2abd397993b0c3f6e09df1a07d4c6c81b2648128cf1e66.scope - libcontainer container 3193f97c4aff18041d2abd397993b0c3f6e09df1a07d4c6c81b2648128cf1e66. Jul 2 00:19:44.819997 systemd[1]: Started cri-containerd-9cc40f7b30fb140cbbe90932e82451eaee32ca352ee2a3e75b1af1d3437f1f51.scope - libcontainer container 9cc40f7b30fb140cbbe90932e82451eaee32ca352ee2a3e75b1af1d3437f1f51. Jul 2 00:19:44.863185 containerd[1464]: time="2024-07-02T00:19:44.863105859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b77791cf7a8858f871f0150f41b1b2a89952b4cd6f6f6555ea43dcf1adcc82ab\"" Jul 2 00:19:44.866502 kubelet[2204]: E0702 00:19:44.865853 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:44.869799 containerd[1464]: time="2024-07-02T00:19:44.869756789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2596c054a4ef7052cc0a34f335a9f67c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3193f97c4aff18041d2abd397993b0c3f6e09df1a07d4c6c81b2648128cf1e66\"" Jul 2 00:19:44.870900 kubelet[2204]: E0702 00:19:44.870881 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:44.871694 containerd[1464]: time="2024-07-02T00:19:44.871666233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cc40f7b30fb140cbbe90932e82451eaee32ca352ee2a3e75b1af1d3437f1f51\"" Jul 2 00:19:44.872969 containerd[1464]: time="2024-07-02T00:19:44.872933081Z" level=info msg="CreateContainer 
within sandbox \"b77791cf7a8858f871f0150f41b1b2a89952b4cd6f6f6555ea43dcf1adcc82ab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:19:44.873089 kubelet[2204]: E0702 00:19:44.873051 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:44.875191 containerd[1464]: time="2024-07-02T00:19:44.875160766Z" level=info msg="CreateContainer within sandbox \"9cc40f7b30fb140cbbe90932e82451eaee32ca352ee2a3e75b1af1d3437f1f51\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:19:44.875766 containerd[1464]: time="2024-07-02T00:19:44.875741573Z" level=info msg="CreateContainer within sandbox \"3193f97c4aff18041d2abd397993b0c3f6e09df1a07d4c6c81b2648128cf1e66\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:19:44.925770 containerd[1464]: time="2024-07-02T00:19:44.925713713Z" level=info msg="CreateContainer within sandbox \"9cc40f7b30fb140cbbe90932e82451eaee32ca352ee2a3e75b1af1d3437f1f51\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d61e3c5686b669f76d7aa3f9dcfc58b13d8766d053711e483115e624318448a1\"" Jul 2 00:19:44.926540 containerd[1464]: time="2024-07-02T00:19:44.926506223Z" level=info msg="StartContainer for \"d61e3c5686b669f76d7aa3f9dcfc58b13d8766d053711e483115e624318448a1\"" Jul 2 00:19:44.929730 containerd[1464]: time="2024-07-02T00:19:44.929678718Z" level=info msg="CreateContainer within sandbox \"b77791cf7a8858f871f0150f41b1b2a89952b4cd6f6f6555ea43dcf1adcc82ab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa6797a1be7749dc62058647cd76bceced062691b3c1bfa27f81456d792c325f\"" Jul 2 00:19:44.930102 containerd[1464]: time="2024-07-02T00:19:44.930055282Z" level=info msg="StartContainer for \"fa6797a1be7749dc62058647cd76bceced062691b3c1bfa27f81456d792c325f\"" Jul 2 00:19:44.936789 
containerd[1464]: time="2024-07-02T00:19:44.936733430Z" level=info msg="CreateContainer within sandbox \"3193f97c4aff18041d2abd397993b0c3f6e09df1a07d4c6c81b2648128cf1e66\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d3071da2da7bceba0617d1eacbfe3de6dbb10887e4bfcb646e55bae80f4f143e\"" Jul 2 00:19:44.937897 containerd[1464]: time="2024-07-02T00:19:44.937199985Z" level=info msg="StartContainer for \"d3071da2da7bceba0617d1eacbfe3de6dbb10887e4bfcb646e55bae80f4f143e\"" Jul 2 00:19:44.957655 systemd[1]: Started cri-containerd-d61e3c5686b669f76d7aa3f9dcfc58b13d8766d053711e483115e624318448a1.scope - libcontainer container d61e3c5686b669f76d7aa3f9dcfc58b13d8766d053711e483115e624318448a1. Jul 2 00:19:44.967673 systemd[1]: Started cri-containerd-fa6797a1be7749dc62058647cd76bceced062691b3c1bfa27f81456d792c325f.scope - libcontainer container fa6797a1be7749dc62058647cd76bceced062691b3c1bfa27f81456d792c325f. Jul 2 00:19:44.975622 systemd[1]: Started cri-containerd-d3071da2da7bceba0617d1eacbfe3de6dbb10887e4bfcb646e55bae80f4f143e.scope - libcontainer container d3071da2da7bceba0617d1eacbfe3de6dbb10887e4bfcb646e55bae80f4f143e. 
Jul 2 00:19:45.015509 containerd[1464]: time="2024-07-02T00:19:45.013259411Z" level=info msg="StartContainer for \"d61e3c5686b669f76d7aa3f9dcfc58b13d8766d053711e483115e624318448a1\" returns successfully" Jul 2 00:19:45.032994 containerd[1464]: time="2024-07-02T00:19:45.032937609Z" level=info msg="StartContainer for \"d3071da2da7bceba0617d1eacbfe3de6dbb10887e4bfcb646e55bae80f4f143e\" returns successfully" Jul 2 00:19:45.037728 containerd[1464]: time="2024-07-02T00:19:45.037559697Z" level=info msg="StartContainer for \"fa6797a1be7749dc62058647cd76bceced062691b3c1bfa27f81456d792c325f\" returns successfully" Jul 2 00:19:45.507470 kubelet[2204]: E0702 00:19:45.507360 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:45.513004 kubelet[2204]: E0702 00:19:45.512979 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:45.514107 kubelet[2204]: E0702 00:19:45.514083 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:45.585038 kubelet[2204]: I0702 00:19:45.584993 2204 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:19:46.105008 kubelet[2204]: E0702 00:19:46.104974 2204 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 00:19:46.190511 kubelet[2204]: I0702 00:19:46.190448 2204 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:19:46.469760 kubelet[2204]: I0702 00:19:46.469475 2204 apiserver.go:52] "Watching apiserver" Jul 2 00:19:46.473577 kubelet[2204]: I0702 00:19:46.473532 2204 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:19:46.521566 kubelet[2204]: E0702 00:19:46.521528 2204 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 2 00:19:46.522198 kubelet[2204]: E0702 00:19:46.522170 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:50.007891 systemd[1]: Reloading requested from client PID 2485 ('systemctl') (unit session-7.scope)... Jul 2 00:19:50.007909 systemd[1]: Reloading... Jul 2 00:19:50.076829 zram_generator::config[2522]: No configuration found. Jul 2 00:19:50.327990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:19:50.421467 systemd[1]: Reloading finished in 413 ms. Jul 2 00:19:50.469266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:50.486927 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:19:50.487250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:19:50.487303 systemd[1]: kubelet.service: Consumed 1.112s CPU time, 111.5M memory peak, 0B memory swap peak. Jul 2 00:19:50.498925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:19:50.647497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:19:50.652624 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:19:50.709132 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:19:50.709132 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:19:50.709132 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:19:50.709662 kubelet[2567]: I0702 00:19:50.709188 2567 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:19:50.714579 kubelet[2567]: I0702 00:19:50.714554 2567 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:19:50.714579 kubelet[2567]: I0702 00:19:50.714573 2567 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:19:50.714762 kubelet[2567]: I0702 00:19:50.714748 2567 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:19:50.716262 kubelet[2567]: I0702 00:19:50.716239 2567 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:19:50.717517 kubelet[2567]: I0702 00:19:50.717325 2567 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:19:50.728036 kubelet[2567]: I0702 00:19:50.727751 2567 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:19:50.728036 kubelet[2567]: I0702 00:19:50.727964 2567 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:19:50.728217 kubelet[2567]: I0702 00:19:50.728119 2567 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:19:50.728217 kubelet[2567]: I0702 00:19:50.728147 2567 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:19:50.728217 kubelet[2567]: I0702 00:19:50.728159 2567 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:19:50.728217 kubelet[2567]: I0702 
00:19:50.728199 2567 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:19:50.728429 kubelet[2567]: I0702 00:19:50.728311 2567 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:19:50.728429 kubelet[2567]: I0702 00:19:50.728327 2567 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:19:50.728429 kubelet[2567]: I0702 00:19:50.728357 2567 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:19:50.728429 kubelet[2567]: I0702 00:19:50.728386 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:19:50.731511 kubelet[2567]: I0702 00:19:50.729158 2567 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:19:50.731511 kubelet[2567]: I0702 00:19:50.729959 2567 server.go:1232] "Started kubelet" Jul 2 00:19:50.731511 kubelet[2567]: I0702 00:19:50.730091 2567 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:19:50.731511 kubelet[2567]: I0702 00:19:50.730257 2567 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:19:50.731511 kubelet[2567]: I0702 00:19:50.730558 2567 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:19:50.731511 kubelet[2567]: I0702 00:19:50.730954 2567 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:19:50.732799 kubelet[2567]: I0702 00:19:50.732723 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:19:50.734022 kubelet[2567]: E0702 00:19:50.733776 2567 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:19:50.734199 kubelet[2567]: E0702 00:19:50.733803 2567 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:19:50.736030 kubelet[2567]: I0702 00:19:50.736009 2567 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:19:50.736442 kubelet[2567]: I0702 00:19:50.736424 2567 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:19:50.743821 kubelet[2567]: I0702 00:19:50.743312 2567 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:19:50.755604 kubelet[2567]: I0702 00:19:50.755568 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:19:50.757867 kubelet[2567]: I0702 00:19:50.757853 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:19:50.757959 kubelet[2567]: I0702 00:19:50.757949 2567 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:19:50.758028 kubelet[2567]: I0702 00:19:50.758019 2567 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:19:50.758116 kubelet[2567]: E0702 00:19:50.758105 2567 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:19:50.799536 kubelet[2567]: I0702 00:19:50.799465 2567 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:19:50.799689 kubelet[2567]: I0702 00:19:50.799583 2567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:19:50.799689 kubelet[2567]: I0702 00:19:50.799602 2567 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:19:50.799779 kubelet[2567]: I0702 00:19:50.799764 2567 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:19:50.799824 kubelet[2567]: I0702 00:19:50.799787 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:19:50.799824 kubelet[2567]: I0702 00:19:50.799795 2567 policy_none.go:49] "None policy: Start" Jul 2 00:19:50.800648 kubelet[2567]: I0702 
00:19:50.800615 2567 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:19:50.800648 kubelet[2567]: I0702 00:19:50.800654 2567 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:19:50.800842 kubelet[2567]: I0702 00:19:50.800829 2567 state_mem.go:75] "Updated machine memory state" Jul 2 00:19:50.804708 kubelet[2567]: I0702 00:19:50.804670 2567 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:19:50.805061 kubelet[2567]: I0702 00:19:50.805042 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:19:50.840601 kubelet[2567]: I0702 00:19:50.840565 2567 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:19:50.859292 kubelet[2567]: I0702 00:19:50.859259 2567 topology_manager.go:215] "Topology Admit Handler" podUID="2596c054a4ef7052cc0a34f335a9f67c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:19:50.859458 kubelet[2567]: I0702 00:19:50.859395 2567 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:19:50.859458 kubelet[2567]: I0702 00:19:50.859445 2567 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:19:50.885204 kubelet[2567]: I0702 00:19:50.885162 2567 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 00:19:50.885322 kubelet[2567]: I0702 00:19:50.885263 2567 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:19:50.944936 kubelet[2567]: I0702 00:19:50.944787 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2596c054a4ef7052cc0a34f335a9f67c-k8s-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"2596c054a4ef7052cc0a34f335a9f67c\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:19:50.944936 kubelet[2567]: I0702 00:19:50.944841 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2596c054a4ef7052cc0a34f335a9f67c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2596c054a4ef7052cc0a34f335a9f67c\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:19:50.944936 kubelet[2567]: I0702 00:19:50.944870 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:19:50.945176 kubelet[2567]: I0702 00:19:50.944950 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:19:50.945176 kubelet[2567]: I0702 00:19:50.945001 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:19:50.945176 kubelet[2567]: I0702 00:19:50.945029 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" 
(UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:19:50.945176 kubelet[2567]: I0702 00:19:50.945061 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2596c054a4ef7052cc0a34f335a9f67c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2596c054a4ef7052cc0a34f335a9f67c\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:19:50.945176 kubelet[2567]: I0702 00:19:50.945086 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:19:50.945333 kubelet[2567]: I0702 00:19:50.945120 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:19:51.171865 kubelet[2567]: E0702 00:19:51.171610 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:51.171865 kubelet[2567]: E0702 00:19:51.171773 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:51.171865 kubelet[2567]: E0702 00:19:51.171804 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 2 00:19:51.729274 kubelet[2567]: I0702 00:19:51.729231 2567 apiserver.go:52] "Watching apiserver" Jul 2 00:19:51.740247 kubelet[2567]: I0702 00:19:51.740190 2567 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:19:51.775256 kubelet[2567]: E0702 00:19:51.773504 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:51.775256 kubelet[2567]: E0702 00:19:51.774895 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:51.844436 kubelet[2567]: E0702 00:19:51.844292 2567 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 00:19:51.844436 kubelet[2567]: I0702 00:19:51.844330 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.844243063 podCreationTimestamp="2024-07-02 00:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:19:51.843675992 +0000 UTC m=+1.186488978" watchObservedRunningTime="2024-07-02 00:19:51.844243063 +0000 UTC m=+1.187056039" Jul 2 00:19:51.844859 kubelet[2567]: E0702 00:19:51.844831 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:52.045762 kubelet[2567]: I0702 00:19:52.045732 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.045683509 podCreationTimestamp="2024-07-02 00:19:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:19:52.045519752 +0000 UTC m=+1.388332749" watchObservedRunningTime="2024-07-02 00:19:52.045683509 +0000 UTC m=+1.388496495" Jul 2 00:19:52.045907 kubelet[2567]: I0702 00:19:52.045828 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.045814314 podCreationTimestamp="2024-07-02 00:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:19:51.891623892 +0000 UTC m=+1.234436878" watchObservedRunningTime="2024-07-02 00:19:52.045814314 +0000 UTC m=+1.388627310" Jul 2 00:19:52.775219 kubelet[2567]: E0702 00:19:52.775185 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:55.459207 kubelet[2567]: E0702 00:19:55.459147 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:55.780630 kubelet[2567]: E0702 00:19:55.780603 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:56.569646 kubelet[2567]: E0702 00:19:56.569611 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:56.782445 kubelet[2567]: E0702 00:19:56.782403 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:19:57.029374 
sudo[1638]: pam_unix(sudo:session): session closed for user root Jul 2 00:19:57.036035 sshd[1635]: pam_unix(sshd:session): session closed for user core Jul 2 00:19:57.040533 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:51134.service: Deactivated successfully. Jul 2 00:19:57.042602 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:19:57.042823 systemd[1]: session-7.scope: Consumed 5.422s CPU time, 141.0M memory peak, 0B memory swap peak. Jul 2 00:19:57.043321 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:19:57.044200 systemd-logind[1443]: Removed session 7. Jul 2 00:20:00.829831 kubelet[2567]: E0702 00:20:00.829798 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:01.567471 kubelet[2567]: I0702 00:20:01.567429 2567 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:20:01.567884 containerd[1464]: time="2024-07-02T00:20:01.567844465Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:20:01.568300 kubelet[2567]: I0702 00:20:01.568018 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:20:01.789952 kubelet[2567]: E0702 00:20:01.789919 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:20:01.878516 kubelet[2567]: I0702 00:20:01.878360 2567 topology_manager.go:215] "Topology Admit Handler" podUID="134b4bb4-f8fe-470e-a0ea-2550db4d6638" podNamespace="kube-system" podName="kube-proxy-rrg8b"
Jul 2 00:20:01.884706 systemd[1]: Created slice kubepods-besteffort-pod134b4bb4_f8fe_470e_a0ea_2550db4d6638.slice - libcontainer container kubepods-besteffort-pod134b4bb4_f8fe_470e_a0ea_2550db4d6638.slice.
Jul 2 00:20:01.913261 kubelet[2567]: I0702 00:20:01.913214 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/134b4bb4-f8fe-470e-a0ea-2550db4d6638-kube-proxy\") pod \"kube-proxy-rrg8b\" (UID: \"134b4bb4-f8fe-470e-a0ea-2550db4d6638\") " pod="kube-system/kube-proxy-rrg8b"
Jul 2 00:20:01.913261 kubelet[2567]: I0702 00:20:01.913263 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/134b4bb4-f8fe-470e-a0ea-2550db4d6638-lib-modules\") pod \"kube-proxy-rrg8b\" (UID: \"134b4bb4-f8fe-470e-a0ea-2550db4d6638\") " pod="kube-system/kube-proxy-rrg8b"
Jul 2 00:20:01.913261 kubelet[2567]: I0702 00:20:01.913284 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/134b4bb4-f8fe-470e-a0ea-2550db4d6638-xtables-lock\") pod \"kube-proxy-rrg8b\" (UID: \"134b4bb4-f8fe-470e-a0ea-2550db4d6638\") " pod="kube-system/kube-proxy-rrg8b"
Jul 2 00:20:01.913529 kubelet[2567]: I0702 00:20:01.913305 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6542\" (UniqueName: \"kubernetes.io/projected/134b4bb4-f8fe-470e-a0ea-2550db4d6638-kube-api-access-w6542\") pod \"kube-proxy-rrg8b\" (UID: \"134b4bb4-f8fe-470e-a0ea-2550db4d6638\") " pod="kube-system/kube-proxy-rrg8b"
Jul 2 00:20:02.021260 kubelet[2567]: E0702 00:20:02.021221 2567 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 2 00:20:02.021260 kubelet[2567]: E0702 00:20:02.021260 2567 projected.go:198] Error preparing data for projected volume kube-api-access-w6542 for pod kube-system/kube-proxy-rrg8b: configmap "kube-root-ca.crt" not found
Jul 2 00:20:02.021530 kubelet[2567]: E0702 00:20:02.021355 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/134b4bb4-f8fe-470e-a0ea-2550db4d6638-kube-api-access-w6542 podName:134b4bb4-f8fe-470e-a0ea-2550db4d6638 nodeName:}" failed. No retries permitted until 2024-07-02 00:20:02.521313398 +0000 UTC m=+11.864126384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w6542" (UniqueName: "kubernetes.io/projected/134b4bb4-f8fe-470e-a0ea-2550db4d6638-kube-api-access-w6542") pod "kube-proxy-rrg8b" (UID: "134b4bb4-f8fe-470e-a0ea-2550db4d6638") : configmap "kube-root-ca.crt" not found
Jul 2 00:20:02.675077 kubelet[2567]: I0702 00:20:02.673676 2567 topology_manager.go:215] "Topology Admit Handler" podUID="fb0e37fa-54ec-4f84-bd4d-ec9c6fc87bb7" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-mq8lq"
Jul 2 00:20:02.686277 systemd[1]: Created slice kubepods-besteffort-podfb0e37fa_54ec_4f84_bd4d_ec9c6fc87bb7.slice - libcontainer container kubepods-besteffort-podfb0e37fa_54ec_4f84_bd4d_ec9c6fc87bb7.slice.
Jul 2 00:20:02.719568 kubelet[2567]: I0702 00:20:02.719474 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb0e37fa-54ec-4f84-bd4d-ec9c6fc87bb7-var-lib-calico\") pod \"tigera-operator-76c4974c85-mq8lq\" (UID: \"fb0e37fa-54ec-4f84-bd4d-ec9c6fc87bb7\") " pod="tigera-operator/tigera-operator-76c4974c85-mq8lq"
Jul 2 00:20:02.719568 kubelet[2567]: I0702 00:20:02.719572 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b88j\" (UniqueName: \"kubernetes.io/projected/fb0e37fa-54ec-4f84-bd4d-ec9c6fc87bb7-kube-api-access-4b88j\") pod \"tigera-operator-76c4974c85-mq8lq\" (UID: \"fb0e37fa-54ec-4f84-bd4d-ec9c6fc87bb7\") " pod="tigera-operator/tigera-operator-76c4974c85-mq8lq"
Jul 2 00:20:02.795636 kubelet[2567]: E0702 00:20:02.795565 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:20:02.796421 containerd[1464]: time="2024-07-02T00:20:02.796364818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rrg8b,Uid:134b4bb4-f8fe-470e-a0ea-2550db4d6638,Namespace:kube-system,Attempt:0,}"
Jul 2 00:20:02.825068 containerd[1464]: time="2024-07-02T00:20:02.824844361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:20:02.825068 containerd[1464]: time="2024-07-02T00:20:02.824953554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:02.825275 containerd[1464]: time="2024-07-02T00:20:02.824984192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:20:02.825275 containerd[1464]: time="2024-07-02T00:20:02.825096281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:02.856797 systemd[1]: Started cri-containerd-34fa07eab1d6e1b4ce49f01f4bc56410f7b8cbdc94a5120928598ca2f0cb9f2a.scope - libcontainer container 34fa07eab1d6e1b4ce49f01f4bc56410f7b8cbdc94a5120928598ca2f0cb9f2a.
Jul 2 00:20:02.880869 containerd[1464]: time="2024-07-02T00:20:02.880818011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rrg8b,Uid:134b4bb4-f8fe-470e-a0ea-2550db4d6638,Namespace:kube-system,Attempt:0,} returns sandbox id \"34fa07eab1d6e1b4ce49f01f4bc56410f7b8cbdc94a5120928598ca2f0cb9f2a\""
Jul 2 00:20:02.881837 kubelet[2567]: E0702 00:20:02.881813 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:20:02.884268 containerd[1464]: time="2024-07-02T00:20:02.884225547Z" level=info msg="CreateContainer within sandbox \"34fa07eab1d6e1b4ce49f01f4bc56410f7b8cbdc94a5120928598ca2f0cb9f2a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:20:02.905801 containerd[1464]: time="2024-07-02T00:20:02.905730387Z" level=info msg="CreateContainer within sandbox \"34fa07eab1d6e1b4ce49f01f4bc56410f7b8cbdc94a5120928598ca2f0cb9f2a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c69073bd778638227acf153f983bc233e3d5b50435c8e7fb56f526278e3e2c4d\""
Jul 2 00:20:02.906229 containerd[1464]: time="2024-07-02T00:20:02.906153820Z" level=info msg="StartContainer for \"c69073bd778638227acf153f983bc233e3d5b50435c8e7fb56f526278e3e2c4d\""
Jul 2 00:20:02.939690 systemd[1]: Started cri-containerd-c69073bd778638227acf153f983bc233e3d5b50435c8e7fb56f526278e3e2c4d.scope - libcontainer container c69073bd778638227acf153f983bc233e3d5b50435c8e7fb56f526278e3e2c4d.
Jul 2 00:20:02.972222 containerd[1464]: time="2024-07-02T00:20:02.972168159Z" level=info msg="StartContainer for \"c69073bd778638227acf153f983bc233e3d5b50435c8e7fb56f526278e3e2c4d\" returns successfully"
Jul 2 00:20:02.990244 containerd[1464]: time="2024-07-02T00:20:02.990179142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-mq8lq,Uid:fb0e37fa-54ec-4f84-bd4d-ec9c6fc87bb7,Namespace:tigera-operator,Attempt:0,}"
Jul 2 00:20:03.019501 containerd[1464]: time="2024-07-02T00:20:03.019203927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:20:03.019977 containerd[1464]: time="2024-07-02T00:20:03.019936778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:03.020083 containerd[1464]: time="2024-07-02T00:20:03.019996259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:20:03.020083 containerd[1464]: time="2024-07-02T00:20:03.020021266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:20:03.041970 systemd[1]: Started cri-containerd-8e3eadd2c3f54f1839526d2f3786bb5c485967adbe1753fec2c8d294d6d01703.scope - libcontainer container 8e3eadd2c3f54f1839526d2f3786bb5c485967adbe1753fec2c8d294d6d01703.
Jul 2 00:20:03.085416 containerd[1464]: time="2024-07-02T00:20:03.085341014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-mq8lq,Uid:fb0e37fa-54ec-4f84-bd4d-ec9c6fc87bb7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8e3eadd2c3f54f1839526d2f3786bb5c485967adbe1753fec2c8d294d6d01703\""
Jul 2 00:20:03.088912 containerd[1464]: time="2024-07-02T00:20:03.088871892Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 00:20:03.795652 kubelet[2567]: E0702 00:20:03.795615 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:20:04.397241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676186515.mount: Deactivated successfully.
Jul 2 00:20:04.876152 containerd[1464]: time="2024-07-02T00:20:04.876065513Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:04.885058 containerd[1464]: time="2024-07-02T00:20:04.884998882Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068"
Jul 2 00:20:04.886676 containerd[1464]: time="2024-07-02T00:20:04.886636407Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:04.889429 containerd[1464]: time="2024-07-02T00:20:04.889383558Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:20:04.890227 containerd[1464]: time="2024-07-02T00:20:04.890187683Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 1.801274655s"
Jul 2 00:20:04.890284 containerd[1464]: time="2024-07-02T00:20:04.890223040Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jul 2 00:20:04.892212 containerd[1464]: time="2024-07-02T00:20:04.892172107Z" level=info msg="CreateContainer within sandbox \"8e3eadd2c3f54f1839526d2f3786bb5c485967adbe1753fec2c8d294d6d01703\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 00:20:04.908867 containerd[1464]: time="2024-07-02T00:20:04.908814693Z" level=info msg="CreateContainer within sandbox \"8e3eadd2c3f54f1839526d2f3786bb5c485967adbe1753fec2c8d294d6d01703\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4fa5a8de8f9ba125b7abce9bf20f4a688a5abfc4b4a2a67902540e4424622868\""
Jul 2 00:20:04.909351 containerd[1464]: time="2024-07-02T00:20:04.909312264Z" level=info msg="StartContainer for \"4fa5a8de8f9ba125b7abce9bf20f4a688a5abfc4b4a2a67902540e4424622868\""
Jul 2 00:20:04.943734 systemd[1]: Started cri-containerd-4fa5a8de8f9ba125b7abce9bf20f4a688a5abfc4b4a2a67902540e4424622868.scope - libcontainer container 4fa5a8de8f9ba125b7abce9bf20f4a688a5abfc4b4a2a67902540e4424622868.
Jul 2 00:20:04.973414 containerd[1464]: time="2024-07-02T00:20:04.973358426Z" level=info msg="StartContainer for \"4fa5a8de8f9ba125b7abce9bf20f4a688a5abfc4b4a2a67902540e4424622868\" returns successfully"
Jul 2 00:20:05.810211 kubelet[2567]: I0702 00:20:05.810177 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rrg8b" podStartSLOduration=4.810135419 podCreationTimestamp="2024-07-02 00:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:20:03.804072076 +0000 UTC m=+13.146885062" watchObservedRunningTime="2024-07-02 00:20:05.810135419 +0000 UTC m=+15.152948405"
Jul 2 00:20:05.810820 kubelet[2567]: I0702 00:20:05.810278 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-mq8lq" podStartSLOduration=2.006601467 podCreationTimestamp="2024-07-02 00:20:02 +0000 UTC" firstStartedPulling="2024-07-02 00:20:03.086960144 +0000 UTC m=+12.429773130" lastFinishedPulling="2024-07-02 00:20:04.890621665 +0000 UTC m=+14.233434651" observedRunningTime="2024-07-02 00:20:05.810065699 +0000 UTC m=+15.152878685" watchObservedRunningTime="2024-07-02 00:20:05.810262988 +0000 UTC m=+15.153075984"
Jul 2 00:20:08.048624 kubelet[2567]: I0702 00:20:08.048571 2567 topology_manager.go:215] "Topology Admit Handler" podUID="39de95e7-4a7d-45bf-8571-db360e39c787" podNamespace="calico-system" podName="calico-typha-f49864b6d-nzcp7"
Jul 2 00:20:08.072574 systemd[1]: Created slice kubepods-besteffort-pod39de95e7_4a7d_45bf_8571_db360e39c787.slice - libcontainer container kubepods-besteffort-pod39de95e7_4a7d_45bf_8571_db360e39c787.slice.
Jul 2 00:20:08.099179 kubelet[2567]: I0702 00:20:08.097397 2567 topology_manager.go:215] "Topology Admit Handler" podUID="9d3da6be-8464-4762-8909-360c937f51fe" podNamespace="calico-system" podName="calico-node-7478m"
Jul 2 00:20:08.110265 systemd[1]: Created slice kubepods-besteffort-pod9d3da6be_8464_4762_8909_360c937f51fe.slice - libcontainer container kubepods-besteffort-pod9d3da6be_8464_4762_8909_360c937f51fe.slice.
Jul 2 00:20:08.147118 kubelet[2567]: I0702 00:20:08.147049 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxggf\" (UniqueName: \"kubernetes.io/projected/39de95e7-4a7d-45bf-8571-db360e39c787-kube-api-access-zxggf\") pod \"calico-typha-f49864b6d-nzcp7\" (UID: \"39de95e7-4a7d-45bf-8571-db360e39c787\") " pod="calico-system/calico-typha-f49864b6d-nzcp7"
Jul 2 00:20:08.147118 kubelet[2567]: I0702 00:20:08.147102 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39de95e7-4a7d-45bf-8571-db360e39c787-tigera-ca-bundle\") pod \"calico-typha-f49864b6d-nzcp7\" (UID: \"39de95e7-4a7d-45bf-8571-db360e39c787\") " pod="calico-system/calico-typha-f49864b6d-nzcp7"
Jul 2 00:20:08.147118 kubelet[2567]: I0702 00:20:08.147122 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-lib-modules\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147331 kubelet[2567]: I0702 00:20:08.147188 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-flexvol-driver-host\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147331 kubelet[2567]: I0702 00:20:08.147298 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-policysync\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147390 kubelet[2567]: I0702 00:20:08.147364 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d3da6be-8464-4762-8909-360c937f51fe-tigera-ca-bundle\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147418 kubelet[2567]: I0702 00:20:08.147402 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-net-dir\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147444 kubelet[2567]: I0702 00:20:08.147437 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-xtables-lock\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147511 kubelet[2567]: I0702 00:20:08.147472 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-bin-dir\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147511 kubelet[2567]: I0702 00:20:08.147511 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-log-dir\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147588 kubelet[2567]: I0702 00:20:08.147537 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-var-run-calico\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147588 kubelet[2567]: I0702 00:20:08.147561 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-var-lib-calico\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147588 kubelet[2567]: I0702 00:20:08.147585 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9d3da6be-8464-4762-8909-360c937f51fe-node-certs\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147681 kubelet[2567]: I0702 00:20:08.147627 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxbg\" (UniqueName: \"kubernetes.io/projected/9d3da6be-8464-4762-8909-360c937f51fe-kube-api-access-nkxbg\") pod \"calico-node-7478m\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " pod="calico-system/calico-node-7478m"
Jul 2 00:20:08.147681 kubelet[2567]: I0702 00:20:08.147668 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/39de95e7-4a7d-45bf-8571-db360e39c787-typha-certs\") pod \"calico-typha-f49864b6d-nzcp7\" (UID: \"39de95e7-4a7d-45bf-8571-db360e39c787\") " pod="calico-system/calico-typha-f49864b6d-nzcp7"
Jul 2 00:20:08.220310 kubelet[2567]: I0702 00:20:08.219754 2567 topology_manager.go:215] "Topology Admit Handler" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" podNamespace="calico-system" podName="csi-node-driver-kch5r"
Jul 2 00:20:08.220310 kubelet[2567]: E0702 00:20:08.220045 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d"
Jul 2 00:20:08.248249 kubelet[2567]: I0702 00:20:08.248204 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxj6k\" (UniqueName: \"kubernetes.io/projected/a4d63e15-e37d-4fdd-89ad-91a83354224d-kube-api-access-zxj6k\") pod \"csi-node-driver-kch5r\" (UID: \"a4d63e15-e37d-4fdd-89ad-91a83354224d\") " pod="calico-system/csi-node-driver-kch5r"
Jul 2 00:20:08.248729 kubelet[2567]: I0702 00:20:08.248343 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a4d63e15-e37d-4fdd-89ad-91a83354224d-socket-dir\") pod \"csi-node-driver-kch5r\" (UID: \"a4d63e15-e37d-4fdd-89ad-91a83354224d\") " pod="calico-system/csi-node-driver-kch5r"
Jul 2 00:20:08.248729 kubelet[2567]: I0702 00:20:08.248426 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a4d63e15-e37d-4fdd-89ad-91a83354224d-registration-dir\") pod \"csi-node-driver-kch5r\" (UID: \"a4d63e15-e37d-4fdd-89ad-91a83354224d\") " pod="calico-system/csi-node-driver-kch5r"
Jul 2 00:20:08.248729 kubelet[2567]: I0702 00:20:08.248466 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a4d63e15-e37d-4fdd-89ad-91a83354224d-varrun\") pod \"csi-node-driver-kch5r\" (UID: \"a4d63e15-e37d-4fdd-89ad-91a83354224d\") " pod="calico-system/csi-node-driver-kch5r"
Jul 2 00:20:08.248729 kubelet[2567]: I0702 00:20:08.248533 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4d63e15-e37d-4fdd-89ad-91a83354224d-kubelet-dir\") pod \"csi-node-driver-kch5r\" (UID: \"a4d63e15-e37d-4fdd-89ad-91a83354224d\") " pod="calico-system/csi-node-driver-kch5r"
Jul 2 00:20:08.254850 kubelet[2567]: E0702 00:20:08.254321 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.254850 kubelet[2567]: W0702 00:20:08.254348 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.254850 kubelet[2567]: E0702 00:20:08.254382 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.257194 kubelet[2567]: E0702 00:20:08.257178 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.257280 kubelet[2567]: W0702 00:20:08.257267 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.257340 kubelet[2567]: E0702 00:20:08.257330 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.274561 kubelet[2567]: E0702 00:20:08.270085 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.274561 kubelet[2567]: W0702 00:20:08.270108 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.274561 kubelet[2567]: E0702 00:20:08.270145 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.274561 kubelet[2567]: E0702 00:20:08.271820 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.274561 kubelet[2567]: W0702 00:20:08.271830 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.274561 kubelet[2567]: E0702 00:20:08.271853 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.349689 kubelet[2567]: E0702 00:20:08.349514 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.349689 kubelet[2567]: W0702 00:20:08.349546 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.349689 kubelet[2567]: E0702 00:20:08.349587 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.351158 kubelet[2567]: E0702 00:20:08.351128 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.351158 kubelet[2567]: W0702 00:20:08.351143 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.351158 kubelet[2567]: E0702 00:20:08.351161 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.351629 kubelet[2567]: E0702 00:20:08.351602 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.351629 kubelet[2567]: W0702 00:20:08.351617 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.351715 kubelet[2567]: E0702 00:20:08.351645 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.351921 kubelet[2567]: E0702 00:20:08.351907 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.351921 kubelet[2567]: W0702 00:20:08.351919 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.352030 kubelet[2567]: E0702 00:20:08.351943 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.352381 kubelet[2567]: E0702 00:20:08.352151 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.352381 kubelet[2567]: W0702 00:20:08.352164 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.352381 kubelet[2567]: E0702 00:20:08.352205 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.353241 kubelet[2567]: E0702 00:20:08.353215 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.353241 kubelet[2567]: W0702 00:20:08.353227 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.353241 kubelet[2567]: E0702 00:20:08.353243 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.353608 kubelet[2567]: E0702 00:20:08.353583 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.353608 kubelet[2567]: W0702 00:20:08.353597 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.353694 kubelet[2567]: E0702 00:20:08.353614 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.354133 kubelet[2567]: E0702 00:20:08.353811 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.354133 kubelet[2567]: W0702 00:20:08.353820 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.354133 kubelet[2567]: E0702 00:20:08.353831 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.354133 kubelet[2567]: E0702 00:20:08.354120 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.354133 kubelet[2567]: W0702 00:20:08.354130 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.354315 kubelet[2567]: E0702 00:20:08.354151 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.354431 kubelet[2567]: E0702 00:20:08.354411 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.354431 kubelet[2567]: W0702 00:20:08.354424 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.354545 kubelet[2567]: E0702 00:20:08.354503 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.354752 kubelet[2567]: E0702 00:20:08.354715 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.354752 kubelet[2567]: W0702 00:20:08.354729 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.354836 kubelet[2567]: E0702 00:20:08.354825 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.355302 kubelet[2567]: E0702 00:20:08.355283 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.355302 kubelet[2567]: W0702 00:20:08.355300 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.355395 kubelet[2567]: E0702 00:20:08.355344 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.355578 kubelet[2567]: E0702 00:20:08.355559 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.355578 kubelet[2567]: W0702 00:20:08.355574 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.355649 kubelet[2567]: E0702 00:20:08.355593 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.355869 kubelet[2567]: E0702 00:20:08.355803 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.355869 kubelet[2567]: W0702 00:20:08.355815 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.355869 kubelet[2567]: E0702 00:20:08.355833 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.356077 kubelet[2567]: E0702 00:20:08.356053 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.356077 kubelet[2567]: W0702 00:20:08.356065 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.356152 kubelet[2567]: E0702 00:20:08.356094 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.356381 kubelet[2567]: E0702 00:20:08.356366 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.356381 kubelet[2567]: W0702 00:20:08.356378 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.356455 kubelet[2567]: E0702 00:20:08.356397 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 00:20:08.356675 kubelet[2567]: E0702 00:20:08.356659 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 00:20:08.356675 kubelet[2567]: W0702 00:20:08.356671 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 00:20:08.356761 kubelet[2567]: E0702 00:20:08.356688 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:08.356967 kubelet[2567]: E0702 00:20:08.356943 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:08.356967 kubelet[2567]: W0702 00:20:08.356957 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:08.357073 kubelet[2567]: E0702 00:20:08.357022 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:08.357247 kubelet[2567]: E0702 00:20:08.357224 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:08.357247 kubelet[2567]: W0702 00:20:08.357238 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:08.357320 kubelet[2567]: E0702 00:20:08.357261 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:08.357515 kubelet[2567]: E0702 00:20:08.357476 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:08.357560 kubelet[2567]: W0702 00:20:08.357518 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:08.357560 kubelet[2567]: E0702 00:20:08.357533 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:08.357809 kubelet[2567]: E0702 00:20:08.357781 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:08.357809 kubelet[2567]: W0702 00:20:08.357796 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:08.357809 kubelet[2567]: E0702 00:20:08.357821 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:08.358250 kubelet[2567]: E0702 00:20:08.358217 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:08.358319 kubelet[2567]: W0702 00:20:08.358267 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:08.358319 kubelet[2567]: E0702 00:20:08.358310 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:08.358716 kubelet[2567]: E0702 00:20:08.358699 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:08.358716 kubelet[2567]: W0702 00:20:08.358710 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:08.358808 kubelet[2567]: E0702 00:20:08.358725 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:08.358977 kubelet[2567]: E0702 00:20:08.358961 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:08.358977 kubelet[2567]: W0702 00:20:08.358971 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:08.359061 kubelet[2567]: E0702 00:20:08.358983 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:08.359278 kubelet[2567]: E0702 00:20:08.359262 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:08.359278 kubelet[2567]: W0702 00:20:08.359273 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:08.359360 kubelet[2567]: E0702 00:20:08.359285 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:08.369608 kubelet[2567]: E0702 00:20:08.369475 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:08.369608 kubelet[2567]: W0702 00:20:08.369514 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:08.369608 kubelet[2567]: E0702 00:20:08.369545 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:08.378623 kubelet[2567]: E0702 00:20:08.378592 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:08.381309 containerd[1464]: time="2024-07-02T00:20:08.380234240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f49864b6d-nzcp7,Uid:39de95e7-4a7d-45bf-8571-db360e39c787,Namespace:calico-system,Attempt:0,}" Jul 2 00:20:08.415400 kubelet[2567]: E0702 00:20:08.415355 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:08.417961 containerd[1464]: time="2024-07-02T00:20:08.417693022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7478m,Uid:9d3da6be-8464-4762-8909-360c937f51fe,Namespace:calico-system,Attempt:0,}" Jul 2 00:20:08.987048 containerd[1464]: time="2024-07-02T00:20:08.986897226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:08.987048 containerd[1464]: time="2024-07-02T00:20:08.986974972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:08.987616 containerd[1464]: time="2024-07-02T00:20:08.987043530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:08.987616 containerd[1464]: time="2024-07-02T00:20:08.987071643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:09.013546 containerd[1464]: time="2024-07-02T00:20:09.013346610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:09.013546 containerd[1464]: time="2024-07-02T00:20:09.013449432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:09.013546 containerd[1464]: time="2024-07-02T00:20:09.013472144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:09.013546 containerd[1464]: time="2024-07-02T00:20:09.013516027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:09.019529 systemd[1]: Started cri-containerd-2780e83a91324a70eb5cc64a41decb0536c33ba7f49b96011eeaccbccc8c0316.scope - libcontainer container 2780e83a91324a70eb5cc64a41decb0536c33ba7f49b96011eeaccbccc8c0316. Jul 2 00:20:09.048937 systemd[1]: Started cri-containerd-b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68.scope - libcontainer container b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68. 
Jul 2 00:20:09.075306 containerd[1464]: time="2024-07-02T00:20:09.075254856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f49864b6d-nzcp7,Uid:39de95e7-4a7d-45bf-8571-db360e39c787,Namespace:calico-system,Attempt:0,} returns sandbox id \"2780e83a91324a70eb5cc64a41decb0536c33ba7f49b96011eeaccbccc8c0316\"" Jul 2 00:20:09.076346 kubelet[2567]: E0702 00:20:09.076227 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:09.079874 containerd[1464]: time="2024-07-02T00:20:09.079748578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:20:09.099388 containerd[1464]: time="2024-07-02T00:20:09.099273419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7478m,Uid:9d3da6be-8464-4762-8909-360c937f51fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\"" Jul 2 00:20:09.100643 kubelet[2567]: E0702 00:20:09.100425 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:09.760511 kubelet[2567]: E0702 00:20:09.759368 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:11.759156 kubelet[2567]: E0702 00:20:11.759106 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" 
podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:13.758968 kubelet[2567]: E0702 00:20:13.758902 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:14.031903 containerd[1464]: time="2024-07-02T00:20:14.031828230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:14.032794 containerd[1464]: time="2024-07-02T00:20:14.032738955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 00:20:14.034270 containerd[1464]: time="2024-07-02T00:20:14.034233364Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:14.036993 containerd[1464]: time="2024-07-02T00:20:14.036922138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:14.037572 containerd[1464]: time="2024-07-02T00:20:14.037524426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.957717739s" Jul 2 00:20:14.037620 containerd[1464]: time="2024-07-02T00:20:14.037569441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference 
\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:20:14.038507 containerd[1464]: time="2024-07-02T00:20:14.038244745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:20:14.052953 containerd[1464]: time="2024-07-02T00:20:14.052901310Z" level=info msg="CreateContainer within sandbox \"2780e83a91324a70eb5cc64a41decb0536c33ba7f49b96011eeaccbccc8c0316\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:20:14.081115 containerd[1464]: time="2024-07-02T00:20:14.081022809Z" level=info msg="CreateContainer within sandbox \"2780e83a91324a70eb5cc64a41decb0536c33ba7f49b96011eeaccbccc8c0316\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"98e8f371a9b95942cc6f886839376e4d93d7ca1cf93bbdb71bacb165b97b5ac3\"" Jul 2 00:20:14.083314 containerd[1464]: time="2024-07-02T00:20:14.081768786Z" level=info msg="StartContainer for \"98e8f371a9b95942cc6f886839376e4d93d7ca1cf93bbdb71bacb165b97b5ac3\"" Jul 2 00:20:14.122801 systemd[1]: Started cri-containerd-98e8f371a9b95942cc6f886839376e4d93d7ca1cf93bbdb71bacb165b97b5ac3.scope - libcontainer container 98e8f371a9b95942cc6f886839376e4d93d7ca1cf93bbdb71bacb165b97b5ac3. 
Jul 2 00:20:14.172376 containerd[1464]: time="2024-07-02T00:20:14.172309326Z" level=info msg="StartContainer for \"98e8f371a9b95942cc6f886839376e4d93d7ca1cf93bbdb71bacb165b97b5ac3\" returns successfully" Jul 2 00:20:14.823513 kubelet[2567]: E0702 00:20:14.821346 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:14.832608 kubelet[2567]: I0702 00:20:14.832429 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-f49864b6d-nzcp7" podStartSLOduration=1.871656022 podCreationTimestamp="2024-07-02 00:20:08 +0000 UTC" firstStartedPulling="2024-07-02 00:20:09.077273304 +0000 UTC m=+18.420086301" lastFinishedPulling="2024-07-02 00:20:14.037978095 +0000 UTC m=+23.380791081" observedRunningTime="2024-07-02 00:20:14.8320499 +0000 UTC m=+24.174862886" watchObservedRunningTime="2024-07-02 00:20:14.832360802 +0000 UTC m=+24.175173788" Jul 2 00:20:14.889127 kubelet[2567]: E0702 00:20:14.889083 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.889127 kubelet[2567]: W0702 00:20:14.889113 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.889310 kubelet[2567]: E0702 00:20:14.889145 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.889499 kubelet[2567]: E0702 00:20:14.889469 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.889544 kubelet[2567]: W0702 00:20:14.889478 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.889544 kubelet[2567]: E0702 00:20:14.889540 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.889753 kubelet[2567]: E0702 00:20:14.889740 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.889753 kubelet[2567]: W0702 00:20:14.889751 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.889824 kubelet[2567]: E0702 00:20:14.889765 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.890001 kubelet[2567]: E0702 00:20:14.889979 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.890001 kubelet[2567]: W0702 00:20:14.889990 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.890001 kubelet[2567]: E0702 00:20:14.890003 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.890242 kubelet[2567]: E0702 00:20:14.890227 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.890242 kubelet[2567]: W0702 00:20:14.890238 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.890321 kubelet[2567]: E0702 00:20:14.890252 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.890507 kubelet[2567]: E0702 00:20:14.890476 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.890548 kubelet[2567]: W0702 00:20:14.890506 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.890548 kubelet[2567]: E0702 00:20:14.890520 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.890758 kubelet[2567]: E0702 00:20:14.890746 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.890758 kubelet[2567]: W0702 00:20:14.890756 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.890895 kubelet[2567]: E0702 00:20:14.890771 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.890996 kubelet[2567]: E0702 00:20:14.890984 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.890996 kubelet[2567]: W0702 00:20:14.890995 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.891058 kubelet[2567]: E0702 00:20:14.891008 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.891234 kubelet[2567]: E0702 00:20:14.891218 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.891234 kubelet[2567]: W0702 00:20:14.891228 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.891327 kubelet[2567]: E0702 00:20:14.891240 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.891494 kubelet[2567]: E0702 00:20:14.891450 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.891494 kubelet[2567]: W0702 00:20:14.891465 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.891704 kubelet[2567]: E0702 00:20:14.891518 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.891783 kubelet[2567]: E0702 00:20:14.891767 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.891819 kubelet[2567]: W0702 00:20:14.891779 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.891819 kubelet[2567]: E0702 00:20:14.891804 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.892021 kubelet[2567]: E0702 00:20:14.892006 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.892021 kubelet[2567]: W0702 00:20:14.892017 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.892088 kubelet[2567]: E0702 00:20:14.892028 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.892242 kubelet[2567]: E0702 00:20:14.892223 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.892242 kubelet[2567]: W0702 00:20:14.892236 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.892242 kubelet[2567]: E0702 00:20:14.892247 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.892517 kubelet[2567]: E0702 00:20:14.892473 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.892517 kubelet[2567]: W0702 00:20:14.892500 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.892517 kubelet[2567]: E0702 00:20:14.892515 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.892792 kubelet[2567]: E0702 00:20:14.892778 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.892792 kubelet[2567]: W0702 00:20:14.892790 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.892870 kubelet[2567]: E0702 00:20:14.892806 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.899358 kubelet[2567]: E0702 00:20:14.899330 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.899358 kubelet[2567]: W0702 00:20:14.899350 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.899358 kubelet[2567]: E0702 00:20:14.899372 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.899658 kubelet[2567]: E0702 00:20:14.899636 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.899658 kubelet[2567]: W0702 00:20:14.899650 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.899751 kubelet[2567]: E0702 00:20:14.899673 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.900101 kubelet[2567]: E0702 00:20:14.900063 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.900101 kubelet[2567]: W0702 00:20:14.900092 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.900195 kubelet[2567]: E0702 00:20:14.900134 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.900384 kubelet[2567]: E0702 00:20:14.900362 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.900384 kubelet[2567]: W0702 00:20:14.900374 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.900500 kubelet[2567]: E0702 00:20:14.900392 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.900623 kubelet[2567]: E0702 00:20:14.900600 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.900623 kubelet[2567]: W0702 00:20:14.900613 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.900727 kubelet[2567]: E0702 00:20:14.900630 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.900894 kubelet[2567]: E0702 00:20:14.900873 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.900894 kubelet[2567]: W0702 00:20:14.900884 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.900986 kubelet[2567]: E0702 00:20:14.900902 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.901108 kubelet[2567]: E0702 00:20:14.901086 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.901108 kubelet[2567]: W0702 00:20:14.901100 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.901190 kubelet[2567]: E0702 00:20:14.901119 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.901401 kubelet[2567]: E0702 00:20:14.901383 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.901401 kubelet[2567]: W0702 00:20:14.901395 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.901541 kubelet[2567]: E0702 00:20:14.901432 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.901616 kubelet[2567]: E0702 00:20:14.901602 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.901616 kubelet[2567]: W0702 00:20:14.901612 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.901701 kubelet[2567]: E0702 00:20:14.901642 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.901816 kubelet[2567]: E0702 00:20:14.901802 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.901816 kubelet[2567]: W0702 00:20:14.901812 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.901886 kubelet[2567]: E0702 00:20:14.901838 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.902071 kubelet[2567]: E0702 00:20:14.902057 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.902071 kubelet[2567]: W0702 00:20:14.902065 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.902149 kubelet[2567]: E0702 00:20:14.902081 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.902268 kubelet[2567]: E0702 00:20:14.902252 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.902268 kubelet[2567]: W0702 00:20:14.902262 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.902343 kubelet[2567]: E0702 00:20:14.902288 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.902518 kubelet[2567]: E0702 00:20:14.902505 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.902518 kubelet[2567]: W0702 00:20:14.902514 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.902602 kubelet[2567]: E0702 00:20:14.902527 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.902836 kubelet[2567]: E0702 00:20:14.902820 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.902836 kubelet[2567]: W0702 00:20:14.902832 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.902916 kubelet[2567]: E0702 00:20:14.902851 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.903129 kubelet[2567]: E0702 00:20:14.903114 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.903129 kubelet[2567]: W0702 00:20:14.903123 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.903212 kubelet[2567]: E0702 00:20:14.903139 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.903347 kubelet[2567]: E0702 00:20:14.903334 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.903347 kubelet[2567]: W0702 00:20:14.903342 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.903427 kubelet[2567]: E0702 00:20:14.903357 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:14.903632 kubelet[2567]: E0702 00:20:14.903617 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.903632 kubelet[2567]: W0702 00:20:14.903627 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.903732 kubelet[2567]: E0702 00:20:14.903642 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:20:14.903846 kubelet[2567]: E0702 00:20:14.903831 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:20:14.903846 kubelet[2567]: W0702 00:20:14.903841 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:20:14.903918 kubelet[2567]: E0702 00:20:14.903852 2567 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:20:15.528617 containerd[1464]: time="2024-07-02T00:20:15.528512702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:15.529571 containerd[1464]: time="2024-07-02T00:20:15.529470596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 00:20:15.531070 containerd[1464]: time="2024-07-02T00:20:15.531013215Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:15.533571 containerd[1464]: time="2024-07-02T00:20:15.533504991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:15.534337 containerd[1464]: time="2024-07-02T00:20:15.534293347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.496014359s" Jul 2 00:20:15.534337 containerd[1464]: time="2024-07-02T00:20:15.534329906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:20:15.536372 containerd[1464]: time="2024-07-02T00:20:15.536279967Z" level=info msg="CreateContainer within sandbox \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:20:15.558130 containerd[1464]: time="2024-07-02T00:20:15.558065634Z" level=info msg="CreateContainer within sandbox \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\"" Jul 2 00:20:15.558844 containerd[1464]: time="2024-07-02T00:20:15.558782948Z" level=info msg="StartContainer for \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\"" Jul 2 00:20:15.605807 systemd[1]: Started cri-containerd-7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb.scope - libcontainer container 7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb. Jul 2 00:20:15.656307 systemd[1]: cri-containerd-7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb.scope: Deactivated successfully. Jul 2 00:20:15.759140 kubelet[2567]: E0702 00:20:15.758733 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:15.952039 containerd[1464]: time="2024-07-02T00:20:15.951016626Z" level=info msg="StartContainer for \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\" returns successfully" Jul 2 00:20:15.956330 kubelet[2567]: E0702 00:20:15.956279 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:15.962903 containerd[1464]: time="2024-07-02T00:20:15.962343286Z" level=info msg="StopContainer for \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\" with timeout 5 (s)" Jul 2 00:20:15.964520 
containerd[1464]: time="2024-07-02T00:20:15.963247790Z" level=info msg="Stop container \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\" with signal terminated" Jul 2 00:20:15.995556 containerd[1464]: time="2024-07-02T00:20:15.995418915Z" level=info msg="shim disconnected" id=7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb namespace=k8s.io Jul 2 00:20:15.995556 containerd[1464]: time="2024-07-02T00:20:15.995536495Z" level=warning msg="cleaning up after shim disconnected" id=7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb namespace=k8s.io Jul 2 00:20:15.995556 containerd[1464]: time="2024-07-02T00:20:15.995550542Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:20:16.017202 containerd[1464]: time="2024-07-02T00:20:16.017128420Z" level=info msg="StopContainer for \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\" returns successfully" Jul 2 00:20:16.018088 containerd[1464]: time="2024-07-02T00:20:16.018050226Z" level=info msg="StopPodSandbox for \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\"" Jul 2 00:20:16.018280 containerd[1464]: time="2024-07-02T00:20:16.018089309Z" level=info msg="Container to stop \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:20:16.025910 systemd[1]: cri-containerd-b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68.scope: Deactivated successfully. Jul 2 00:20:16.044025 systemd[1]: run-containerd-runc-k8s.io-7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb-runc.pvanCo.mount: Deactivated successfully. Jul 2 00:20:16.044136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb-rootfs.mount: Deactivated successfully. 
Jul 2 00:20:16.044215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68-shm.mount: Deactivated successfully. Jul 2 00:20:16.051394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68-rootfs.mount: Deactivated successfully. Jul 2 00:20:16.056358 containerd[1464]: time="2024-07-02T00:20:16.056259291Z" level=info msg="shim disconnected" id=b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68 namespace=k8s.io Jul 2 00:20:16.056358 containerd[1464]: time="2024-07-02T00:20:16.056328130Z" level=warning msg="cleaning up after shim disconnected" id=b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68 namespace=k8s.io Jul 2 00:20:16.056358 containerd[1464]: time="2024-07-02T00:20:16.056340253Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:20:16.071956 containerd[1464]: time="2024-07-02T00:20:16.071895813Z" level=info msg="TearDown network for sandbox \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\" successfully" Jul 2 00:20:16.071956 containerd[1464]: time="2024-07-02T00:20:16.071942902Z" level=info msg="StopPodSandbox for \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\" returns successfully" Jul 2 00:20:16.107843 kubelet[2567]: I0702 00:20:16.107774 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-xtables-lock\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.107843 kubelet[2567]: I0702 00:20:16.107845 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkxbg\" (UniqueName: \"kubernetes.io/projected/9d3da6be-8464-4762-8909-360c937f51fe-kube-api-access-nkxbg\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" 
(UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.107843 kubelet[2567]: I0702 00:20:16.107867 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-bin-dir\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108121 kubelet[2567]: I0702 00:20:16.107888 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-policysync\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108121 kubelet[2567]: I0702 00:20:16.107909 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9d3da6be-8464-4762-8909-360c937f51fe-node-certs\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108121 kubelet[2567]: I0702 00:20:16.107929 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d3da6be-8464-4762-8909-360c937f51fe-tigera-ca-bundle\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108121 kubelet[2567]: I0702 00:20:16.107946 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-var-run-calico\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108121 kubelet[2567]: I0702 00:20:16.107932 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-xtables-lock" 
(OuterVolumeSpecName: "xtables-lock") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:20:16.108121 kubelet[2567]: I0702 00:20:16.107999 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:20:16.108324 kubelet[2567]: I0702 00:20:16.107963 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-log-dir\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108324 kubelet[2567]: I0702 00:20:16.108031 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:20:16.108324 kubelet[2567]: I0702 00:20:16.108049 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-policysync" (OuterVolumeSpecName: "policysync") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:20:16.108324 kubelet[2567]: I0702 00:20:16.108061 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-lib-modules\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108324 kubelet[2567]: I0702 00:20:16.108091 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-net-dir\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108324 kubelet[2567]: I0702 00:20:16.108116 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-flexvol-driver-host\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108518 kubelet[2567]: I0702 00:20:16.108143 2567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-var-lib-calico\") pod \"9d3da6be-8464-4762-8909-360c937f51fe\" (UID: \"9d3da6be-8464-4762-8909-360c937f51fe\") " Jul 2 00:20:16.108518 kubelet[2567]: I0702 00:20:16.108210 2567 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.108518 kubelet[2567]: I0702 00:20:16.108226 2567 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-policysync\") on node \"localhost\" DevicePath \"\"" Jul 2 
00:20:16.108518 kubelet[2567]: I0702 00:20:16.108240 2567 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.108518 kubelet[2567]: I0702 00:20:16.108264 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:20:16.108518 kubelet[2567]: I0702 00:20:16.108288 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:20:16.108746 kubelet[2567]: I0702 00:20:16.108310 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:20:16.108746 kubelet[2567]: I0702 00:20:16.108333 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:20:16.108746 kubelet[2567]: I0702 00:20:16.108466 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:20:16.109520 kubelet[2567]: I0702 00:20:16.108993 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3da6be-8464-4762-8909-360c937f51fe-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:20:16.112103 kubelet[2567]: I0702 00:20:16.112056 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3da6be-8464-4762-8909-360c937f51fe-kube-api-access-nkxbg" (OuterVolumeSpecName: "kube-api-access-nkxbg") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "kube-api-access-nkxbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:20:16.112175 kubelet[2567]: I0702 00:20:16.112106 2567 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3da6be-8464-4762-8909-360c937f51fe-node-certs" (OuterVolumeSpecName: "node-certs") pod "9d3da6be-8464-4762-8909-360c937f51fe" (UID: "9d3da6be-8464-4762-8909-360c937f51fe"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:20:16.114042 systemd[1]: var-lib-kubelet-pods-9d3da6be\x2d8464\x2d4762\x2d8909\x2d360c937f51fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnkxbg.mount: Deactivated successfully. Jul 2 00:20:16.114186 systemd[1]: var-lib-kubelet-pods-9d3da6be\x2d8464\x2d4762\x2d8909\x2d360c937f51fe-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jul 2 00:20:16.209026 kubelet[2567]: I0702 00:20:16.208864 2567 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nkxbg\" (UniqueName: \"kubernetes.io/projected/9d3da6be-8464-4762-8909-360c937f51fe-kube-api-access-nkxbg\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.209026 kubelet[2567]: I0702 00:20:16.208904 2567 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.209026 kubelet[2567]: I0702 00:20:16.208915 2567 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9d3da6be-8464-4762-8909-360c937f51fe-node-certs\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.209026 kubelet[2567]: I0702 00:20:16.208924 2567 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d3da6be-8464-4762-8909-360c937f51fe-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.209026 kubelet[2567]: I0702 00:20:16.208947 2567 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.209026 kubelet[2567]: I0702 00:20:16.208958 2567 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.209026 kubelet[2567]: I0702 00:20:16.208968 2567 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.209026 kubelet[2567]: I0702 00:20:16.208981 2567 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.209407 kubelet[2567]: I0702 00:20:16.208990 2567 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d3da6be-8464-4762-8909-360c937f51fe-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 00:20:16.768610 systemd[1]: Removed slice kubepods-besteffort-pod9d3da6be_8464_4762_8909_360c937f51fe.slice - libcontainer container kubepods-besteffort-pod9d3da6be_8464_4762_8909_360c937f51fe.slice. 
Jul 2 00:20:16.963457 kubelet[2567]: E0702 00:20:16.963419 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:16.964707 kubelet[2567]: I0702 00:20:16.964675 2567 scope.go:117] "RemoveContainer" containerID="7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb" Jul 2 00:20:16.967046 containerd[1464]: time="2024-07-02T00:20:16.966995341Z" level=info msg="RemoveContainer for \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\"" Jul 2 00:20:16.972226 containerd[1464]: time="2024-07-02T00:20:16.972182514Z" level=info msg="RemoveContainer for \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\" returns successfully" Jul 2 00:20:16.973955 kubelet[2567]: I0702 00:20:16.973890 2567 scope.go:117] "RemoveContainer" containerID="7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb" Jul 2 00:20:16.976873 containerd[1464]: time="2024-07-02T00:20:16.975813504Z" level=error msg="ContainerStatus for \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\": not found" Jul 2 00:20:16.976961 kubelet[2567]: E0702 00:20:16.976082 2567 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\": not found" containerID="7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb" Jul 2 00:20:16.976961 kubelet[2567]: I0702 00:20:16.976164 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb"} err="failed to get container status 
\"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cfd9a99969adc9d4ea9b0ac0180463ce34690cef14081a851c003abcb3951cb\": not found" Jul 2 00:20:16.997824 kubelet[2567]: I0702 00:20:16.997768 2567 topology_manager.go:215] "Topology Admit Handler" podUID="1bb28cfe-1c97-4580-a89f-005de4c56bbb" podNamespace="calico-system" podName="calico-node-9mr2t" Jul 2 00:20:17.000771 kubelet[2567]: E0702 00:20:17.000247 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d3da6be-8464-4762-8909-360c937f51fe" containerName="flexvol-driver" Jul 2 00:20:17.000771 kubelet[2567]: I0702 00:20:17.000354 2567 memory_manager.go:346] "RemoveStaleState removing state" podUID="9d3da6be-8464-4762-8909-360c937f51fe" containerName="flexvol-driver" Jul 2 00:20:17.011448 systemd[1]: Created slice kubepods-besteffort-pod1bb28cfe_1c97_4580_a89f_005de4c56bbb.slice - libcontainer container kubepods-besteffort-pod1bb28cfe_1c97_4580_a89f_005de4c56bbb.slice. 
Jul 2 00:20:17.013922 kubelet[2567]: I0702 00:20:17.013901 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bb28cfe-1c97-4580-a89f-005de4c56bbb-tigera-ca-bundle\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014035 kubelet[2567]: I0702 00:20:17.014024 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1bb28cfe-1c97-4580-a89f-005de4c56bbb-var-lib-calico\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014123 kubelet[2567]: I0702 00:20:17.014096 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1bb28cfe-1c97-4580-a89f-005de4c56bbb-cni-log-dir\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014278 kubelet[2567]: I0702 00:20:17.014135 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1bb28cfe-1c97-4580-a89f-005de4c56bbb-var-run-calico\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014278 kubelet[2567]: I0702 00:20:17.014166 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bb28cfe-1c97-4580-a89f-005de4c56bbb-lib-modules\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014278 kubelet[2567]: I0702 00:20:17.014185 2567 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1bb28cfe-1c97-4580-a89f-005de4c56bbb-cni-bin-dir\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014278 kubelet[2567]: I0702 00:20:17.014202 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1bb28cfe-1c97-4580-a89f-005de4c56bbb-cni-net-dir\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014278 kubelet[2567]: I0702 00:20:17.014221 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1bb28cfe-1c97-4580-a89f-005de4c56bbb-flexvol-driver-host\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014451 kubelet[2567]: I0702 00:20:17.014250 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bb28cfe-1c97-4580-a89f-005de4c56bbb-xtables-lock\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014451 kubelet[2567]: I0702 00:20:17.014269 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1bb28cfe-1c97-4580-a89f-005de4c56bbb-node-certs\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014451 kubelet[2567]: I0702 00:20:17.014288 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1bb28cfe-1c97-4580-a89f-005de4c56bbb-policysync\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.014451 kubelet[2567]: I0702 00:20:17.014310 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xfnq\" (UniqueName: \"kubernetes.io/projected/1bb28cfe-1c97-4580-a89f-005de4c56bbb-kube-api-access-7xfnq\") pod \"calico-node-9mr2t\" (UID: \"1bb28cfe-1c97-4580-a89f-005de4c56bbb\") " pod="calico-system/calico-node-9mr2t" Jul 2 00:20:17.315852 kubelet[2567]: E0702 00:20:17.315797 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:17.317313 containerd[1464]: time="2024-07-02T00:20:17.316821732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mr2t,Uid:1bb28cfe-1c97-4580-a89f-005de4c56bbb,Namespace:calico-system,Attempt:0,}" Jul 2 00:20:17.345097 containerd[1464]: time="2024-07-02T00:20:17.344584034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:17.345097 containerd[1464]: time="2024-07-02T00:20:17.344707185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:17.345097 containerd[1464]: time="2024-07-02T00:20:17.344827561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:17.345097 containerd[1464]: time="2024-07-02T00:20:17.344841918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:17.379912 systemd[1]: Started cri-containerd-0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79.scope - libcontainer container 0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79. Jul 2 00:20:17.413055 containerd[1464]: time="2024-07-02T00:20:17.412986348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mr2t,Uid:1bb28cfe-1c97-4580-a89f-005de4c56bbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79\"" Jul 2 00:20:17.413954 kubelet[2567]: E0702 00:20:17.413924 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:17.416402 containerd[1464]: time="2024-07-02T00:20:17.416349667Z" level=info msg="CreateContainer within sandbox \"0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:20:17.446876 containerd[1464]: time="2024-07-02T00:20:17.446814552Z" level=info msg="CreateContainer within sandbox \"0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"768ac097ac3fb0b095452b73fbfd630c6260c11d280af6b31419b2b901139dcb\"" Jul 2 00:20:17.447714 containerd[1464]: time="2024-07-02T00:20:17.447669793Z" level=info msg="StartContainer for \"768ac097ac3fb0b095452b73fbfd630c6260c11d280af6b31419b2b901139dcb\"" Jul 2 00:20:17.482778 systemd[1]: Started cri-containerd-768ac097ac3fb0b095452b73fbfd630c6260c11d280af6b31419b2b901139dcb.scope - libcontainer container 768ac097ac3fb0b095452b73fbfd630c6260c11d280af6b31419b2b901139dcb. 
Jul 2 00:20:17.528013 containerd[1464]: time="2024-07-02T00:20:17.527948940Z" level=info msg="StartContainer for \"768ac097ac3fb0b095452b73fbfd630c6260c11d280af6b31419b2b901139dcb\" returns successfully" Jul 2 00:20:17.541842 systemd[1]: cri-containerd-768ac097ac3fb0b095452b73fbfd630c6260c11d280af6b31419b2b901139dcb.scope: Deactivated successfully. Jul 2 00:20:17.758930 kubelet[2567]: E0702 00:20:17.758729 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:17.841979 containerd[1464]: time="2024-07-02T00:20:17.841898595Z" level=info msg="shim disconnected" id=768ac097ac3fb0b095452b73fbfd630c6260c11d280af6b31419b2b901139dcb namespace=k8s.io Jul 2 00:20:17.841979 containerd[1464]: time="2024-07-02T00:20:17.841955040Z" level=warning msg="cleaning up after shim disconnected" id=768ac097ac3fb0b095452b73fbfd630c6260c11d280af6b31419b2b901139dcb namespace=k8s.io Jul 2 00:20:17.841979 containerd[1464]: time="2024-07-02T00:20:17.841966111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:20:17.966260 kubelet[2567]: E0702 00:20:17.966210 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:17.966932 containerd[1464]: time="2024-07-02T00:20:17.966896781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:20:18.121881 systemd[1]: run-containerd-runc-k8s.io-0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79-runc.BESzks.mount: Deactivated successfully. 
Jul 2 00:20:18.763018 kubelet[2567]: I0702 00:20:18.762971 2567 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9d3da6be-8464-4762-8909-360c937f51fe" path="/var/lib/kubelet/pods/9d3da6be-8464-4762-8909-360c937f51fe/volumes" Jul 2 00:20:19.758570 kubelet[2567]: E0702 00:20:19.758460 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:21.759303 kubelet[2567]: E0702 00:20:21.759199 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:23.583077 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:59554.service - OpenSSH per-connection server daemon (10.0.0.1:59554). Jul 2 00:20:23.649470 sshd[3384]: Accepted publickey for core from 10.0.0.1 port 59554 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:20:23.651671 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:23.670320 systemd-logind[1443]: New session 8 of user core. Jul 2 00:20:23.675651 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 00:20:23.770592 kubelet[2567]: E0702 00:20:23.770533 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:23.887401 sshd[3384]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:23.894301 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:59554.service: Deactivated successfully. Jul 2 00:20:23.897099 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:20:23.897975 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:20:23.900040 systemd-logind[1443]: Removed session 8. Jul 2 00:20:25.759117 kubelet[2567]: E0702 00:20:25.759068 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:27.253461 containerd[1464]: time="2024-07-02T00:20:27.253398628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:27.257279 containerd[1464]: time="2024-07-02T00:20:27.257184260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:20:27.287964 containerd[1464]: time="2024-07-02T00:20:27.287883596Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:27.297296 containerd[1464]: time="2024-07-02T00:20:27.297241306Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:27.297989 containerd[1464]: time="2024-07-02T00:20:27.297961627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 9.331026845s" Jul 2 00:20:27.298069 containerd[1464]: time="2024-07-02T00:20:27.297989969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:20:27.301766 containerd[1464]: time="2024-07-02T00:20:27.301718826Z" level=info msg="CreateContainer within sandbox \"0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:20:27.336434 containerd[1464]: time="2024-07-02T00:20:27.336340919Z" level=info msg="CreateContainer within sandbox \"0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0a6d24aba9cfdc38bbfdef7180fc78f0fc6ab5048a9660b4d9c5c21c1a0be838\"" Jul 2 00:20:27.337108 containerd[1464]: time="2024-07-02T00:20:27.337078291Z" level=info msg="StartContainer for \"0a6d24aba9cfdc38bbfdef7180fc78f0fc6ab5048a9660b4d9c5c21c1a0be838\"" Jul 2 00:20:27.376901 systemd[1]: Started cri-containerd-0a6d24aba9cfdc38bbfdef7180fc78f0fc6ab5048a9660b4d9c5c21c1a0be838.scope - libcontainer container 0a6d24aba9cfdc38bbfdef7180fc78f0fc6ab5048a9660b4d9c5c21c1a0be838. 
Jul 2 00:20:27.758777 kubelet[2567]: E0702 00:20:27.758702 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:27.902567 containerd[1464]: time="2024-07-02T00:20:27.902453696Z" level=info msg="StartContainer for \"0a6d24aba9cfdc38bbfdef7180fc78f0fc6ab5048a9660b4d9c5c21c1a0be838\" returns successfully" Jul 2 00:20:27.999362 kubelet[2567]: E0702 00:20:27.999300 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:28.899051 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:42486.service - OpenSSH per-connection server daemon (10.0.0.1:42486). Jul 2 00:20:29.079192 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 42486 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:20:29.081266 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:29.086924 systemd-logind[1443]: New session 9 of user core. Jul 2 00:20:29.093745 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:20:29.245725 sshd[3441]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:29.251056 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:42486.service: Deactivated successfully. Jul 2 00:20:29.253213 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:20:29.254091 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:20:29.255113 systemd-logind[1443]: Removed session 9. 
Jul 2 00:20:29.758384 kubelet[2567]: E0702 00:20:29.758346 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:30.067019 systemd[1]: cri-containerd-0a6d24aba9cfdc38bbfdef7180fc78f0fc6ab5048a9660b4d9c5c21c1a0be838.scope: Deactivated successfully. Jul 2 00:20:30.067429 kubelet[2567]: I0702 00:20:30.067268 2567 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:20:30.097730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a6d24aba9cfdc38bbfdef7180fc78f0fc6ab5048a9660b4d9c5c21c1a0be838-rootfs.mount: Deactivated successfully. Jul 2 00:20:30.137934 kubelet[2567]: I0702 00:20:30.137199 2567 topology_manager.go:215] "Topology Admit Handler" podUID="76223419-840c-46e2-a1d7-e6871c06b488" podNamespace="kube-system" podName="coredns-5dd5756b68-s85f2" Jul 2 00:20:30.141200 kubelet[2567]: I0702 00:20:30.141144 2567 topology_manager.go:215] "Topology Admit Handler" podUID="f79a9b2f-f617-4af5-98dd-a2cf84643f11" podNamespace="kube-system" podName="coredns-5dd5756b68-qdb7m" Jul 2 00:20:30.142496 kubelet[2567]: I0702 00:20:30.142438 2567 topology_manager.go:215] "Topology Admit Handler" podUID="d87f4c49-ede2-4763-a126-c48eb4c2c45e" podNamespace="calico-system" podName="calico-kube-controllers-75688ffb6-2922z" Jul 2 00:20:30.146930 systemd[1]: Created slice kubepods-burstable-pod76223419_840c_46e2_a1d7_e6871c06b488.slice - libcontainer container kubepods-burstable-pod76223419_840c_46e2_a1d7_e6871c06b488.slice. Jul 2 00:20:30.154027 systemd[1]: Created slice kubepods-burstable-podf79a9b2f_f617_4af5_98dd_a2cf84643f11.slice - libcontainer container kubepods-burstable-podf79a9b2f_f617_4af5_98dd_a2cf84643f11.slice. 
Jul 2 00:20:30.161039 systemd[1]: Created slice kubepods-besteffort-podd87f4c49_ede2_4763_a126_c48eb4c2c45e.slice - libcontainer container kubepods-besteffort-podd87f4c49_ede2_4763_a126_c48eb4c2c45e.slice. Jul 2 00:20:30.203313 kubelet[2567]: I0702 00:20:30.203231 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pscld\" (UniqueName: \"kubernetes.io/projected/76223419-840c-46e2-a1d7-e6871c06b488-kube-api-access-pscld\") pod \"coredns-5dd5756b68-s85f2\" (UID: \"76223419-840c-46e2-a1d7-e6871c06b488\") " pod="kube-system/coredns-5dd5756b68-s85f2" Jul 2 00:20:30.203313 kubelet[2567]: I0702 00:20:30.203326 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d87f4c49-ede2-4763-a126-c48eb4c2c45e-tigera-ca-bundle\") pod \"calico-kube-controllers-75688ffb6-2922z\" (UID: \"d87f4c49-ede2-4763-a126-c48eb4c2c45e\") " pod="calico-system/calico-kube-controllers-75688ffb6-2922z" Jul 2 00:20:30.203575 kubelet[2567]: I0702 00:20:30.203387 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f79a9b2f-f617-4af5-98dd-a2cf84643f11-config-volume\") pod \"coredns-5dd5756b68-qdb7m\" (UID: \"f79a9b2f-f617-4af5-98dd-a2cf84643f11\") " pod="kube-system/coredns-5dd5756b68-qdb7m" Jul 2 00:20:30.203575 kubelet[2567]: I0702 00:20:30.203433 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tch\" (UniqueName: \"kubernetes.io/projected/f79a9b2f-f617-4af5-98dd-a2cf84643f11-kube-api-access-q6tch\") pod \"coredns-5dd5756b68-qdb7m\" (UID: \"f79a9b2f-f617-4af5-98dd-a2cf84643f11\") " pod="kube-system/coredns-5dd5756b68-qdb7m" Jul 2 00:20:30.203575 kubelet[2567]: I0702 00:20:30.203531 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-wxhfl\" (UniqueName: \"kubernetes.io/projected/d87f4c49-ede2-4763-a126-c48eb4c2c45e-kube-api-access-wxhfl\") pod \"calico-kube-controllers-75688ffb6-2922z\" (UID: \"d87f4c49-ede2-4763-a126-c48eb4c2c45e\") " pod="calico-system/calico-kube-controllers-75688ffb6-2922z" Jul 2 00:20:30.203713 kubelet[2567]: I0702 00:20:30.203593 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76223419-840c-46e2-a1d7-e6871c06b488-config-volume\") pod \"coredns-5dd5756b68-s85f2\" (UID: \"76223419-840c-46e2-a1d7-e6871c06b488\") " pod="kube-system/coredns-5dd5756b68-s85f2" Jul 2 00:20:30.250149 containerd[1464]: time="2024-07-02T00:20:30.250071227Z" level=info msg="shim disconnected" id=0a6d24aba9cfdc38bbfdef7180fc78f0fc6ab5048a9660b4d9c5c21c1a0be838 namespace=k8s.io Jul 2 00:20:30.250149 containerd[1464]: time="2024-07-02T00:20:30.250137311Z" level=warning msg="cleaning up after shim disconnected" id=0a6d24aba9cfdc38bbfdef7180fc78f0fc6ab5048a9660b4d9c5c21c1a0be838 namespace=k8s.io Jul 2 00:20:30.250149 containerd[1464]: time="2024-07-02T00:20:30.250148041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:20:30.451373 kubelet[2567]: E0702 00:20:30.451193 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:30.452013 containerd[1464]: time="2024-07-02T00:20:30.451944178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-s85f2,Uid:76223419-840c-46e2-a1d7-e6871c06b488,Namespace:kube-system,Attempt:0,}" Jul 2 00:20:30.457827 kubelet[2567]: E0702 00:20:30.457793 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:30.458547 
containerd[1464]: time="2024-07-02T00:20:30.458444779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qdb7m,Uid:f79a9b2f-f617-4af5-98dd-a2cf84643f11,Namespace:kube-system,Attempt:0,}" Jul 2 00:20:30.464820 containerd[1464]: time="2024-07-02T00:20:30.464765481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75688ffb6-2922z,Uid:d87f4c49-ede2-4763-a126-c48eb4c2c45e,Namespace:calico-system,Attempt:0,}" Jul 2 00:20:30.724316 containerd[1464]: time="2024-07-02T00:20:30.724103762Z" level=error msg="Failed to destroy network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.724597 containerd[1464]: time="2024-07-02T00:20:30.724555509Z" level=error msg="encountered an error cleaning up failed sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.724639 containerd[1464]: time="2024-07-02T00:20:30.724607446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qdb7m,Uid:f79a9b2f-f617-4af5-98dd-a2cf84643f11,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.725398 kubelet[2567]: E0702 00:20:30.724932 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.725398 kubelet[2567]: E0702 00:20:30.724997 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qdb7m" Jul 2 00:20:30.725398 kubelet[2567]: E0702 00:20:30.725037 2567 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qdb7m" Jul 2 00:20:30.725657 kubelet[2567]: E0702 00:20:30.725087 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-qdb7m_kube-system(f79a9b2f-f617-4af5-98dd-a2cf84643f11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-qdb7m_kube-system(f79a9b2f-f617-4af5-98dd-a2cf84643f11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-5dd5756b68-qdb7m" podUID="f79a9b2f-f617-4af5-98dd-a2cf84643f11" Jul 2 00:20:30.728144 containerd[1464]: time="2024-07-02T00:20:30.727988381Z" level=error msg="Failed to destroy network for sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.728668 containerd[1464]: time="2024-07-02T00:20:30.728628872Z" level=error msg="encountered an error cleaning up failed sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.728821 containerd[1464]: time="2024-07-02T00:20:30.728679736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75688ffb6-2922z,Uid:d87f4c49-ede2-4763-a126-c48eb4c2c45e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.729312 kubelet[2567]: E0702 00:20:30.729257 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.729411 kubelet[2567]: E0702 00:20:30.729320 2567 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75688ffb6-2922z" Jul 2 00:20:30.729411 kubelet[2567]: E0702 00:20:30.729344 2567 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75688ffb6-2922z" Jul 2 00:20:30.729411 kubelet[2567]: E0702 00:20:30.729406 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75688ffb6-2922z_calico-system(d87f4c49-ede2-4763-a126-c48eb4c2c45e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75688ffb6-2922z_calico-system(d87f4c49-ede2-4763-a126-c48eb4c2c45e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75688ffb6-2922z" podUID="d87f4c49-ede2-4763-a126-c48eb4c2c45e" Jul 2 00:20:30.729858 containerd[1464]: time="2024-07-02T00:20:30.729816927Z" level=error msg="Failed to destroy network for sandbox 
\"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.730558 containerd[1464]: time="2024-07-02T00:20:30.730475321Z" level=error msg="encountered an error cleaning up failed sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.730660 containerd[1464]: time="2024-07-02T00:20:30.730578144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-s85f2,Uid:76223419-840c-46e2-a1d7-e6871c06b488,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.730772 kubelet[2567]: E0702 00:20:30.730751 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:30.730772 kubelet[2567]: E0702 00:20:30.730780 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-s85f2" Jul 2 00:20:30.730895 kubelet[2567]: E0702 00:20:30.730797 2567 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-s85f2" Jul 2 00:20:30.730895 kubelet[2567]: E0702 00:20:30.730833 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-s85f2_kube-system(76223419-840c-46e2-a1d7-e6871c06b488)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-s85f2_kube-system(76223419-840c-46e2-a1d7-e6871c06b488)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-s85f2" podUID="76223419-840c-46e2-a1d7-e6871c06b488" Jul 2 00:20:31.007232 kubelet[2567]: I0702 00:20:31.005732 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:31.007671 containerd[1464]: time="2024-07-02T00:20:31.006252573Z" level=info msg="StopPodSandbox for \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\"" Jul 2 00:20:31.007671 containerd[1464]: time="2024-07-02T00:20:31.006537347Z" level=info msg="Ensure that sandbox 
a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf in task-service has been cleanup successfully" Jul 2 00:20:31.008551 kubelet[2567]: E0702 00:20:31.008515 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:31.009683 kubelet[2567]: I0702 00:20:31.008817 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:31.009758 containerd[1464]: time="2024-07-02T00:20:31.009523732Z" level=info msg="StopPodSandbox for \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\"" Jul 2 00:20:31.009810 containerd[1464]: time="2024-07-02T00:20:31.009781565Z" level=info msg="Ensure that sandbox 5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0 in task-service has been cleanup successfully" Jul 2 00:20:31.011323 containerd[1464]: time="2024-07-02T00:20:31.011027800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:20:31.012846 kubelet[2567]: I0702 00:20:31.012810 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:20:31.013697 containerd[1464]: time="2024-07-02T00:20:31.013665293Z" level=info msg="StopPodSandbox for \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\"" Jul 2 00:20:31.013905 containerd[1464]: time="2024-07-02T00:20:31.013886877Z" level=info msg="Ensure that sandbox 47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5 in task-service has been cleanup successfully" Jul 2 00:20:31.040282 containerd[1464]: time="2024-07-02T00:20:31.040213596Z" level=error msg="StopPodSandbox for \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\" failed" error="failed to destroy network for sandbox 
\"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:31.040596 kubelet[2567]: E0702 00:20:31.040555 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:31.040596 kubelet[2567]: E0702 00:20:31.040609 2567 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf"} Jul 2 00:20:31.040776 kubelet[2567]: E0702 00:20:31.040645 2567 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d87f4c49-ede2-4763-a126-c48eb4c2c45e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:20:31.040776 kubelet[2567]: E0702 00:20:31.040686 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d87f4c49-ede2-4763-a126-c48eb4c2c45e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75688ffb6-2922z" podUID="d87f4c49-ede2-4763-a126-c48eb4c2c45e" Jul 2 00:20:31.044002 containerd[1464]: time="2024-07-02T00:20:31.043944727Z" level=error msg="StopPodSandbox for \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\" failed" error="failed to destroy network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:31.044252 kubelet[2567]: E0702 00:20:31.044230 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:31.044320 kubelet[2567]: E0702 00:20:31.044282 2567 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0"} Jul 2 00:20:31.044320 kubelet[2567]: E0702 00:20:31.044317 2567 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f79a9b2f-f617-4af5-98dd-a2cf84643f11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Jul 2 00:20:31.044425 kubelet[2567]: E0702 00:20:31.044354 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f79a9b2f-f617-4af5-98dd-a2cf84643f11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-qdb7m" podUID="f79a9b2f-f617-4af5-98dd-a2cf84643f11" Jul 2 00:20:31.045467 containerd[1464]: time="2024-07-02T00:20:31.045424130Z" level=error msg="StopPodSandbox for \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\" failed" error="failed to destroy network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:31.045623 kubelet[2567]: E0702 00:20:31.045599 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:20:31.045663 kubelet[2567]: E0702 00:20:31.045630 2567 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5"} Jul 2 00:20:31.045689 kubelet[2567]: E0702 
00:20:31.045671 2567 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76223419-840c-46e2-a1d7-e6871c06b488\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:20:31.045737 kubelet[2567]: E0702 00:20:31.045704 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76223419-840c-46e2-a1d7-e6871c06b488\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-s85f2" podUID="76223419-840c-46e2-a1d7-e6871c06b488" Jul 2 00:20:31.098350 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5-shm.mount: Deactivated successfully. Jul 2 00:20:31.098519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0-shm.mount: Deactivated successfully. Jul 2 00:20:31.764673 systemd[1]: Created slice kubepods-besteffort-poda4d63e15_e37d_4fdd_89ad_91a83354224d.slice - libcontainer container kubepods-besteffort-poda4d63e15_e37d_4fdd_89ad_91a83354224d.slice. 
Jul 2 00:20:31.766848 containerd[1464]: time="2024-07-02T00:20:31.766810699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kch5r,Uid:a4d63e15-e37d-4fdd-89ad-91a83354224d,Namespace:calico-system,Attempt:0,}" Jul 2 00:20:32.180012 containerd[1464]: time="2024-07-02T00:20:32.179947648Z" level=error msg="Failed to destroy network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:32.180586 containerd[1464]: time="2024-07-02T00:20:32.180516303Z" level=error msg="encountered an error cleaning up failed sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:32.180643 containerd[1464]: time="2024-07-02T00:20:32.180590892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kch5r,Uid:a4d63e15-e37d-4fdd-89ad-91a83354224d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:32.181119 kubelet[2567]: E0702 00:20:32.181095 2567 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:32.181535 kubelet[2567]: E0702 00:20:32.181165 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kch5r" Jul 2 00:20:32.181535 kubelet[2567]: E0702 00:20:32.181197 2567 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kch5r" Jul 2 00:20:32.181535 kubelet[2567]: E0702 00:20:32.181282 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kch5r_calico-system(a4d63e15-e37d-4fdd-89ad-91a83354224d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kch5r_calico-system(a4d63e15-e37d-4fdd-89ad-91a83354224d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:32.182819 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a-shm.mount: 
Deactivated successfully. Jul 2 00:20:33.018453 kubelet[2567]: I0702 00:20:33.018296 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:33.018899 containerd[1464]: time="2024-07-02T00:20:33.018862657Z" level=info msg="StopPodSandbox for \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\"" Jul 2 00:20:33.019462 containerd[1464]: time="2024-07-02T00:20:33.019099159Z" level=info msg="Ensure that sandbox 07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a in task-service has been cleanup successfully" Jul 2 00:20:33.047019 containerd[1464]: time="2024-07-02T00:20:33.046960015Z" level=error msg="StopPodSandbox for \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\" failed" error="failed to destroy network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:33.047341 kubelet[2567]: E0702 00:20:33.047317 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:33.047416 kubelet[2567]: E0702 00:20:33.047370 2567 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a"} Jul 2 00:20:33.047416 kubelet[2567]: E0702 00:20:33.047407 2567 kuberuntime_manager.go:1080] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4d63e15-e37d-4fdd-89ad-91a83354224d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:20:33.047520 kubelet[2567]: E0702 00:20:33.047438 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4d63e15-e37d-4fdd-89ad-91a83354224d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kch5r" podUID="a4d63e15-e37d-4fdd-89ad-91a83354224d" Jul 2 00:20:34.262091 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:42494.service - OpenSSH per-connection server daemon (10.0.0.1:42494). Jul 2 00:20:34.304307 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 42494 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:20:34.306403 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:34.311230 systemd-logind[1443]: New session 10 of user core. Jul 2 00:20:34.319649 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:20:34.463533 sshd[3728]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:34.467584 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:42494.service: Deactivated successfully. Jul 2 00:20:34.470065 systemd[1]: session-10.scope: Deactivated successfully. 
Jul 2 00:20:34.471038 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:20:34.472091 systemd-logind[1443]: Removed session 10. Jul 2 00:20:39.475835 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:42988.service - OpenSSH per-connection server daemon (10.0.0.1:42988). Jul 2 00:20:39.515614 sshd[3748]: Accepted publickey for core from 10.0.0.1 port 42988 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:20:39.517935 sshd[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:39.523997 systemd-logind[1443]: New session 11 of user core. Jul 2 00:20:39.530746 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:20:39.835991 sshd[3748]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:39.839630 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:42988.service: Deactivated successfully. Jul 2 00:20:39.842837 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:20:39.844991 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:20:39.846338 systemd-logind[1443]: Removed session 11. Jul 2 00:20:40.553345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1813633181.mount: Deactivated successfully. 
Jul 2 00:20:41.759530 containerd[1464]: time="2024-07-02T00:20:41.759445964Z" level=info msg="StopPodSandbox for \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\"" Jul 2 00:20:43.187989 containerd[1464]: time="2024-07-02T00:20:43.187910590Z" level=error msg="StopPodSandbox for \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\" failed" error="failed to destroy network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:20:43.188395 kubelet[2567]: E0702 00:20:43.188212 2567 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:20:43.188395 kubelet[2567]: E0702 00:20:43.188272 2567 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5"} Jul 2 00:20:43.188395 kubelet[2567]: E0702 00:20:43.188309 2567 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76223419-840c-46e2-a1d7-e6871c06b488\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 
00:20:43.188395 kubelet[2567]: E0702 00:20:43.188337 2567 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76223419-840c-46e2-a1d7-e6871c06b488\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-s85f2" podUID="76223419-840c-46e2-a1d7-e6871c06b488" Jul 2 00:20:43.435731 containerd[1464]: time="2024-07-02T00:20:43.435640523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:43.494519 containerd[1464]: time="2024-07-02T00:20:43.494278073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:20:43.519989 containerd[1464]: time="2024-07-02T00:20:43.519920242Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:43.576609 containerd[1464]: time="2024-07-02T00:20:43.576430946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:43.577610 containerd[1464]: time="2024-07-02T00:20:43.577551947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size 
\"115238612\" in 12.566480425s" Jul 2 00:20:43.577610 containerd[1464]: time="2024-07-02T00:20:43.577613613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:20:43.592510 containerd[1464]: time="2024-07-02T00:20:43.592440231Z" level=info msg="CreateContainer within sandbox \"0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:20:44.448392 containerd[1464]: time="2024-07-02T00:20:44.448315944Z" level=info msg="CreateContainer within sandbox \"0623b0270915265acb19eea8e72f7c710a32addbdb810d001e073f02a5640d79\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"31435c6ba9c8ef4ebf5526b4ee71b01bd2defedb815166ee03f855ba2b0b0b88\"" Jul 2 00:20:44.449019 containerd[1464]: time="2024-07-02T00:20:44.448983755Z" level=info msg="StartContainer for \"31435c6ba9c8ef4ebf5526b4ee71b01bd2defedb815166ee03f855ba2b0b0b88\"" Jul 2 00:20:44.526071 systemd[1]: Started cri-containerd-31435c6ba9c8ef4ebf5526b4ee71b01bd2defedb815166ee03f855ba2b0b0b88.scope - libcontainer container 31435c6ba9c8ef4ebf5526b4ee71b01bd2defedb815166ee03f855ba2b0b0b88. Jul 2 00:20:44.846807 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:43002.service - OpenSSH per-connection server daemon (10.0.0.1:43002). Jul 2 00:20:45.480533 containerd[1464]: time="2024-07-02T00:20:45.480463897Z" level=info msg="StartContainer for \"31435c6ba9c8ef4ebf5526b4ee71b01bd2defedb815166ee03f855ba2b0b0b88\" returns successfully" Jul 2 00:20:45.487133 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:20:45.487757 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 00:20:45.571456 sshd[3828]: Accepted publickey for core from 10.0.0.1 port 43002 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:20:45.574290 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:45.578616 systemd-logind[1443]: New session 12 of user core. Jul 2 00:20:45.586209 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:20:45.718655 sshd[3828]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:45.724348 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:43002.service: Deactivated successfully. Jul 2 00:20:45.727935 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:20:45.730627 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:20:45.732587 systemd-logind[1443]: Removed session 12. Jul 2 00:20:45.759479 containerd[1464]: time="2024-07-02T00:20:45.759412670Z" level=info msg="StopPodSandbox for \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\"" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:45.967 [INFO][3874] k8s.go 608: Cleaning up netns ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:45.967 [INFO][3874] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" iface="eth0" netns="/var/run/netns/cni-f257df04-8432-d9dc-f1c8-ab64350abf2d" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:45.968 [INFO][3874] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" iface="eth0" netns="/var/run/netns/cni-f257df04-8432-d9dc-f1c8-ab64350abf2d" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:45.968 [INFO][3874] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" iface="eth0" netns="/var/run/netns/cni-f257df04-8432-d9dc-f1c8-ab64350abf2d" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:45.968 [INFO][3874] k8s.go 615: Releasing IP address(es) ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:45.968 [INFO][3874] utils.go 188: Calico CNI releasing IP address ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:46.013 [INFO][3882] ipam_plugin.go 411: Releasing address using handleID ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" HandleID="k8s-pod-network.5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:46.013 [INFO][3882] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:46.013 [INFO][3882] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:46.164 [WARNING][3882] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" HandleID="k8s-pod-network.5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:46.164 [INFO][3882] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" HandleID="k8s-pod-network.5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:46.166 [INFO][3882] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:46.173632 containerd[1464]: 2024-07-02 00:20:46.169 [INFO][3874] k8s.go 621: Teardown processing complete. ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:46.176223 systemd[1]: run-netns-cni\x2df257df04\x2d8432\x2dd9dc\x2df1c8\x2dab64350abf2d.mount: Deactivated successfully. 
Jul 2 00:20:46.176618 containerd[1464]: time="2024-07-02T00:20:46.176580672Z" level=info msg="TearDown network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\" successfully" Jul 2 00:20:46.176618 containerd[1464]: time="2024-07-02T00:20:46.176617021Z" level=info msg="StopPodSandbox for \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\" returns successfully" Jul 2 00:20:46.177346 kubelet[2567]: E0702 00:20:46.176949 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:46.177753 containerd[1464]: time="2024-07-02T00:20:46.177335497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qdb7m,Uid:f79a9b2f-f617-4af5-98dd-a2cf84643f11,Namespace:kube-system,Attempt:1,}" Jul 2 00:20:46.490609 kubelet[2567]: E0702 00:20:46.490403 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:46.579307 kubelet[2567]: I0702 00:20:46.579249 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9mr2t" podStartSLOduration=4.898607913 podCreationTimestamp="2024-07-02 00:20:16 +0000 UTC" firstStartedPulling="2024-07-02 00:20:17.966585378 +0000 UTC m=+27.309398354" lastFinishedPulling="2024-07-02 00:20:43.579796334 +0000 UTC m=+52.922609330" observedRunningTime="2024-07-02 00:20:46.511612182 +0000 UTC m=+55.854425168" watchObservedRunningTime="2024-07-02 00:20:46.511818889 +0000 UTC m=+55.854631875" Jul 2 00:20:46.759619 containerd[1464]: time="2024-07-02T00:20:46.759515074Z" level=info msg="StopPodSandbox for \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\"" Jul 2 00:20:46.759984 containerd[1464]: time="2024-07-02T00:20:46.759514873Z" level=info msg="StopPodSandbox for 
\"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\"" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:46.839 [INFO][3957] k8s.go 608: Cleaning up netns ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:46.839 [INFO][3957] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" iface="eth0" netns="/var/run/netns/cni-c97feb5e-88d1-73f3-16de-4cdbf00e9933" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:46.840 [INFO][3957] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" iface="eth0" netns="/var/run/netns/cni-c97feb5e-88d1-73f3-16de-4cdbf00e9933" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:46.841 [INFO][3957] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" iface="eth0" netns="/var/run/netns/cni-c97feb5e-88d1-73f3-16de-4cdbf00e9933" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:46.841 [INFO][3957] k8s.go 615: Releasing IP address(es) ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:46.841 [INFO][3957] utils.go 188: Calico CNI releasing IP address ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:46.861 [INFO][3970] ipam_plugin.go 411: Releasing address using handleID ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" HandleID="k8s-pod-network.a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:46.861 [INFO][3970] 
ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:46.861 [INFO][3970] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:47.025 [WARNING][3970] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" HandleID="k8s-pod-network.a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:47.025 [INFO][3970] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" HandleID="k8s-pod-network.a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:47.027 [INFO][3970] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:47.033601 containerd[1464]: 2024-07-02 00:20:47.030 [INFO][3957] k8s.go 621: Teardown processing complete. 
ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:47.035421 containerd[1464]: time="2024-07-02T00:20:47.035341815Z" level=info msg="TearDown network for sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\" successfully" Jul 2 00:20:47.035421 containerd[1464]: time="2024-07-02T00:20:47.035376250Z" level=info msg="StopPodSandbox for \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\" returns successfully" Jul 2 00:20:47.036656 containerd[1464]: time="2024-07-02T00:20:47.036223058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75688ffb6-2922z,Uid:d87f4c49-ede2-4763-a126-c48eb4c2c45e,Namespace:calico-system,Attempt:1,}" Jul 2 00:20:47.039078 systemd[1]: run-netns-cni\x2dc97feb5e\x2d88d1\x2d73f3\x2d16de\x2d4cdbf00e9933.mount: Deactivated successfully. Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.035 [INFO][3956] k8s.go 608: Cleaning up netns ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.036 [INFO][3956] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" iface="eth0" netns="/var/run/netns/cni-47bafb3a-a45c-3b41-95b2-722107e4fb81" Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.036 [INFO][3956] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" iface="eth0" netns="/var/run/netns/cni-47bafb3a-a45c-3b41-95b2-722107e4fb81" Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.036 [INFO][3956] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" iface="eth0" netns="/var/run/netns/cni-47bafb3a-a45c-3b41-95b2-722107e4fb81" Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.036 [INFO][3956] k8s.go 615: Releasing IP address(es) ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.036 [INFO][3956] utils.go 188: Calico CNI releasing IP address ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.056 [INFO][3991] ipam_plugin.go 411: Releasing address using handleID ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" HandleID="k8s-pod-network.07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.056 [INFO][3991] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.056 [INFO][3991] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.305 [WARNING][3991] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" HandleID="k8s-pod-network.07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.305 [INFO][3991] ipam_plugin.go 439: Releasing address using workloadID ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" HandleID="k8s-pod-network.07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.326 [INFO][3991] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:47.331114 containerd[1464]: 2024-07-02 00:20:47.329 [INFO][3956] k8s.go 621: Teardown processing complete. ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:47.332912 containerd[1464]: time="2024-07-02T00:20:47.332865313Z" level=info msg="TearDown network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\" successfully" Jul 2 00:20:47.332912 containerd[1464]: time="2024-07-02T00:20:47.332909364Z" level=info msg="StopPodSandbox for \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\" returns successfully" Jul 2 00:20:47.333758 containerd[1464]: time="2024-07-02T00:20:47.333728550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kch5r,Uid:a4d63e15-e37d-4fdd-89ad-91a83354224d,Namespace:calico-system,Attempt:1,}" Jul 2 00:20:47.335456 systemd[1]: run-netns-cni\x2d47bafb3a\x2da45c\x2d3b41\x2d95b2\x2d722107e4fb81.mount: Deactivated successfully. 
Jul 2 00:20:47.494709 kubelet[2567]: E0702 00:20:47.494666 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:47.505864 systemd-networkd[1380]: cali9eb4a94175f: Link UP Jul 2 00:20:47.518356 systemd-networkd[1380]: cali9eb4a94175f: Gained carrier Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:46.901 [INFO][3978] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.028 [INFO][3978] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--qdb7m-eth0 coredns-5dd5756b68- kube-system f79a9b2f-f617-4af5-98dd-a2cf84643f11 869 0 2024-07-02 00:20:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-qdb7m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9eb4a94175f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Namespace="kube-system" Pod="coredns-5dd5756b68-qdb7m" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qdb7m-" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.028 [INFO][3978] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Namespace="kube-system" Pod="coredns-5dd5756b68-qdb7m" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.363 [INFO][3999] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" 
HandleID="k8s-pod-network.69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.378 [INFO][3999] ipam_plugin.go 264: Auto assigning IP ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" HandleID="k8s-pod-network.69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000128770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-qdb7m", "timestamp":"2024-07-02 00:20:47.363760391 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.378 [INFO][3999] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.379 [INFO][3999] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.379 [INFO][3999] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.381 [INFO][3999] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" host="localhost" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.386 [INFO][3999] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.391 [INFO][3999] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.393 [INFO][3999] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.395 [INFO][3999] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.395 [INFO][3999] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" host="localhost" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.397 [INFO][3999] ipam.go 1685: Creating new handle: k8s-pod-network.69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2 Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.400 [INFO][3999] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" host="localhost" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.488 [INFO][3999] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" host="localhost" Jul 2 
00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.488 [INFO][3999] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" host="localhost" Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.488 [INFO][3999] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:47.556999 containerd[1464]: 2024-07-02 00:20:47.488 [INFO][3999] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" HandleID="k8s-pod-network.69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:47.557922 containerd[1464]: 2024-07-02 00:20:47.491 [INFO][3978] k8s.go 386: Populated endpoint ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Namespace="kube-system" Pod="coredns-5dd5756b68-qdb7m" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--qdb7m-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f79a9b2f-f617-4af5-98dd-a2cf84643f11", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-5dd5756b68-qdb7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9eb4a94175f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:47.557922 containerd[1464]: 2024-07-02 00:20:47.492 [INFO][3978] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Namespace="kube-system" Pod="coredns-5dd5756b68-qdb7m" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:47.557922 containerd[1464]: 2024-07-02 00:20:47.492 [INFO][3978] dataplane_linux.go 68: Setting the host side veth name to cali9eb4a94175f ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Namespace="kube-system" Pod="coredns-5dd5756b68-qdb7m" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:47.557922 containerd[1464]: 2024-07-02 00:20:47.505 [INFO][3978] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Namespace="kube-system" Pod="coredns-5dd5756b68-qdb7m" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:47.557922 containerd[1464]: 2024-07-02 00:20:47.514 [INFO][3978] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Namespace="kube-system" Pod="coredns-5dd5756b68-qdb7m" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--qdb7m-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f79a9b2f-f617-4af5-98dd-a2cf84643f11", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2", Pod:"coredns-5dd5756b68-qdb7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9eb4a94175f", MAC:"d6:9c:08:b6:a9:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:47.557922 containerd[1464]: 2024-07-02 00:20:47.535 [INFO][3978] k8s.go 500: Wrote updated endpoint to datastore ContainerID="69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2" Namespace="kube-system" Pod="coredns-5dd5756b68-qdb7m" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:48.286285 containerd[1464]: time="2024-07-02T00:20:48.286177963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:48.286285 containerd[1464]: time="2024-07-02T00:20:48.286223158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:48.286285 containerd[1464]: time="2024-07-02T00:20:48.286235762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:48.286285 containerd[1464]: time="2024-07-02T00:20:48.286245228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:48.309786 systemd[1]: Started cri-containerd-69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2.scope - libcontainer container 69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2. 
Jul 2 00:20:48.334791 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:20:48.366617 containerd[1464]: time="2024-07-02T00:20:48.366566753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qdb7m,Uid:f79a9b2f-f617-4af5-98dd-a2cf84643f11,Namespace:kube-system,Attempt:1,} returns sandbox id \"69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2\"" Jul 2 00:20:48.367479 kubelet[2567]: E0702 00:20:48.367450 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:48.370759 containerd[1464]: time="2024-07-02T00:20:48.370722302Z" level=info msg="CreateContainer within sandbox \"69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:20:48.404438 systemd-networkd[1380]: calie464276c5ac: Link UP Jul 2 00:20:48.404650 systemd-networkd[1380]: calie464276c5ac: Gained carrier Jul 2 00:20:48.435299 systemd-networkd[1380]: vxlan.calico: Link UP Jul 2 00:20:48.435308 systemd-networkd[1380]: vxlan.calico: Gained carrier Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:47.777 [INFO][4167] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:47.896 [INFO][4167] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0 calico-kube-controllers-75688ffb6- calico-system d87f4c49-ede2-4763-a126-c48eb4c2c45e 878 0 2024-07-02 00:20:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75688ffb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} 
{k8s localhost calico-kube-controllers-75688ffb6-2922z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie464276c5ac [] []}} ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Namespace="calico-system" Pod="calico-kube-controllers-75688ffb6-2922z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:47.896 [INFO][4167] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Namespace="calico-system" Pod="calico-kube-controllers-75688ffb6-2922z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.233 [INFO][4198] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" HandleID="k8s-pod-network.352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.309 [INFO][4198] ipam_plugin.go 264: Auto assigning IP ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" HandleID="k8s-pod-network.352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75688ffb6-2922z", "timestamp":"2024-07-02 00:20:48.233558263 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 
00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.309 [INFO][4198] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.309 [INFO][4198] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.309 [INFO][4198] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.314 [INFO][4198] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" host="localhost" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.321 [INFO][4198] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.328 [INFO][4198] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.331 [INFO][4198] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.333 [INFO][4198] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.333 [INFO][4198] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" host="localhost" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.335 [INFO][4198] ipam.go 1685: Creating new handle: k8s-pod-network.352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94 Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.339 [INFO][4198] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" host="localhost" Jul 2 
00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.391 [INFO][4198] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" host="localhost" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.391 [INFO][4198] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" host="localhost" Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.391 [INFO][4198] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:48.613652 containerd[1464]: 2024-07-02 00:20:48.391 [INFO][4198] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" HandleID="k8s-pod-network.352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:48.614335 containerd[1464]: 2024-07-02 00:20:48.399 [INFO][4167] k8s.go 386: Populated endpoint ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Namespace="calico-system" Pod="calico-kube-controllers-75688ffb6-2922z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0", GenerateName:"calico-kube-controllers-75688ffb6-", Namespace:"calico-system", SelfLink:"", UID:"d87f4c49-ede2-4763-a126-c48eb4c2c45e", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"75688ffb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75688ffb6-2922z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie464276c5ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:48.614335 containerd[1464]: 2024-07-02 00:20:48.400 [INFO][4167] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Namespace="calico-system" Pod="calico-kube-controllers-75688ffb6-2922z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:48.614335 containerd[1464]: 2024-07-02 00:20:48.400 [INFO][4167] dataplane_linux.go 68: Setting the host side veth name to calie464276c5ac ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Namespace="calico-system" Pod="calico-kube-controllers-75688ffb6-2922z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:48.614335 containerd[1464]: 2024-07-02 00:20:48.404 [INFO][4167] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Namespace="calico-system" Pod="calico-kube-controllers-75688ffb6-2922z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 
00:20:48.614335 containerd[1464]: 2024-07-02 00:20:48.405 [INFO][4167] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Namespace="calico-system" Pod="calico-kube-controllers-75688ffb6-2922z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0", GenerateName:"calico-kube-controllers-75688ffb6-", Namespace:"calico-system", SelfLink:"", UID:"d87f4c49-ede2-4763-a126-c48eb4c2c45e", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75688ffb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94", Pod:"calico-kube-controllers-75688ffb6-2922z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie464276c5ac", MAC:"e2:02:14:eb:79:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:48.614335 containerd[1464]: 2024-07-02 00:20:48.610 
[INFO][4167] k8s.go 500: Wrote updated endpoint to datastore ContainerID="352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94" Namespace="calico-system" Pod="calico-kube-controllers-75688ffb6-2922z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:48.749280 containerd[1464]: time="2024-07-02T00:20:48.749081476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:48.749280 containerd[1464]: time="2024-07-02T00:20:48.749127582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:48.749280 containerd[1464]: time="2024-07-02T00:20:48.749140076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:48.749280 containerd[1464]: time="2024-07-02T00:20:48.749149173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:48.773621 systemd[1]: Started cri-containerd-352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94.scope - libcontainer container 352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94. 
Jul 2 00:20:48.785747 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:20:48.810253 containerd[1464]: time="2024-07-02T00:20:48.810190067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75688ffb6-2922z,Uid:d87f4c49-ede2-4763-a126-c48eb4c2c45e,Namespace:calico-system,Attempt:1,} returns sandbox id \"352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94\"" Jul 2 00:20:48.811611 containerd[1464]: time="2024-07-02T00:20:48.811583780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:20:49.106656 systemd-networkd[1380]: cali9eb4a94175f: Gained IPv6LL Jul 2 00:20:49.420657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229366086.mount: Deactivated successfully. Jul 2 00:20:49.617763 systemd-networkd[1380]: cali9a1ad43b099: Link UP Jul 2 00:20:49.618400 systemd-networkd[1380]: cali9a1ad43b099: Gained carrier Jul 2 00:20:49.618848 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.331 [INFO][4342] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kch5r-eth0 csi-node-driver- calico-system a4d63e15-e37d-4fdd-89ad-91a83354224d 879 0 2024-07-02 00:20:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-kch5r eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali9a1ad43b099 [] []}} ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Namespace="calico-system" Pod="csi-node-driver-kch5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kch5r-" Jul 2 
00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.331 [INFO][4342] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Namespace="calico-system" Pod="csi-node-driver-kch5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.443 [INFO][4398] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" HandleID="k8s-pod-network.2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.540 [INFO][4398] ipam_plugin.go 264: Auto assigning IP ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" HandleID="k8s-pod-network.2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kch5r", "timestamp":"2024-07-02 00:20:49.44358973 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.540 [INFO][4398] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.540 [INFO][4398] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.540 [INFO][4398] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.542 [INFO][4398] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" host="localhost" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.546 [INFO][4398] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.549 [INFO][4398] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.550 [INFO][4398] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.552 [INFO][4398] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.552 [INFO][4398] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" host="localhost" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.554 [INFO][4398] ipam.go 1685: Creating new handle: k8s-pod-network.2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423 Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.557 [INFO][4398] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" host="localhost" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.612 [INFO][4398] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" host="localhost" Jul 2 
00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.612 [INFO][4398] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" host="localhost" Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.612 [INFO][4398] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:49.639798 containerd[1464]: 2024-07-02 00:20:49.612 [INFO][4398] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" HandleID="k8s-pod-network.2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:49.642954 containerd[1464]: 2024-07-02 00:20:49.615 [INFO][4342] k8s.go 386: Populated endpoint ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Namespace="calico-system" Pod="csi-node-driver-kch5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kch5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kch5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4d63e15-e37d-4fdd-89ad-91a83354224d", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kch5r", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9a1ad43b099", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:49.642954 containerd[1464]: 2024-07-02 00:20:49.615 [INFO][4342] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Namespace="calico-system" Pod="csi-node-driver-kch5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:49.642954 containerd[1464]: 2024-07-02 00:20:49.615 [INFO][4342] dataplane_linux.go 68: Setting the host side veth name to cali9a1ad43b099 ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Namespace="calico-system" Pod="csi-node-driver-kch5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:49.642954 containerd[1464]: 2024-07-02 00:20:49.617 [INFO][4342] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Namespace="calico-system" Pod="csi-node-driver-kch5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:49.642954 containerd[1464]: 2024-07-02 00:20:49.618 [INFO][4342] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Namespace="calico-system" Pod="csi-node-driver-kch5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kch5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kch5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4d63e15-e37d-4fdd-89ad-91a83354224d", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423", Pod:"csi-node-driver-kch5r", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9a1ad43b099", MAC:"ea:90:c9:07:fa:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:49.642954 containerd[1464]: 2024-07-02 00:20:49.636 [INFO][4342] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423" Namespace="calico-system" Pod="csi-node-driver-kch5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:49.874652 systemd-networkd[1380]: calie464276c5ac: Gained IPv6LL Jul 2 00:20:50.018473 containerd[1464]: time="2024-07-02T00:20:50.018368829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:20:50.019069 containerd[1464]: time="2024-07-02T00:20:50.019024098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:50.019069 containerd[1464]: time="2024-07-02T00:20:50.019049656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:20:50.019069 containerd[1464]: time="2024-07-02T00:20:50.019063271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:20:50.040628 systemd[1]: Started cri-containerd-2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423.scope - libcontainer container 2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423. Jul 2 00:20:50.052317 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:20:50.063698 containerd[1464]: time="2024-07-02T00:20:50.063648595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kch5r,Uid:a4d63e15-e37d-4fdd-89ad-91a83354224d,Namespace:calico-system,Attempt:1,} returns sandbox id \"2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423\"" Jul 2 00:20:50.260926 containerd[1464]: time="2024-07-02T00:20:50.260797360Z" level=info msg="CreateContainer within sandbox \"69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c99130e50674281c2bf7e8c9e450edd1e3a8b53f5381c47dc004b2d54e5ead80\"" Jul 2 00:20:50.261294 containerd[1464]: time="2024-07-02T00:20:50.261267261Z" level=info msg="StartContainer for \"c99130e50674281c2bf7e8c9e450edd1e3a8b53f5381c47dc004b2d54e5ead80\"" Jul 2 00:20:50.289620 systemd[1]: Started 
cri-containerd-c99130e50674281c2bf7e8c9e450edd1e3a8b53f5381c47dc004b2d54e5ead80.scope - libcontainer container c99130e50674281c2bf7e8c9e450edd1e3a8b53f5381c47dc004b2d54e5ead80. Jul 2 00:20:50.613124 containerd[1464]: time="2024-07-02T00:20:50.613037237Z" level=info msg="StartContainer for \"c99130e50674281c2bf7e8c9e450edd1e3a8b53f5381c47dc004b2d54e5ead80\" returns successfully" Jul 2 00:20:50.617249 kubelet[2567]: E0702 00:20:50.617202 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:50.639596 kubelet[2567]: I0702 00:20:50.639133 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qdb7m" podStartSLOduration=48.63909441 podCreationTimestamp="2024-07-02 00:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:20:50.638852838 +0000 UTC m=+59.981665824" watchObservedRunningTime="2024-07-02 00:20:50.63909441 +0000 UTC m=+59.981907396" Jul 2 00:20:50.738793 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:60018.service - OpenSSH per-connection server daemon (10.0.0.1:60018). Jul 2 00:20:50.745181 containerd[1464]: time="2024-07-02T00:20:50.744871024Z" level=info msg="StopPodSandbox for \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\"" Jul 2 00:20:50.854731 sshd[4492]: Accepted publickey for core from 10.0.0.1 port 60018 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:20:50.856464 sshd[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:50.861732 systemd-logind[1443]: New session 13 of user core. Jul 2 00:20:50.870616 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.784 [WARNING][4508] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kch5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4d63e15-e37d-4fdd-89ad-91a83354224d", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423", Pod:"csi-node-driver-kch5r", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9a1ad43b099", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.828 [INFO][4508] k8s.go 608: Cleaning up netns ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.829 
[INFO][4508] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" iface="eth0" netns="" Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.829 [INFO][4508] k8s.go 615: Releasing IP address(es) ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.829 [INFO][4508] utils.go 188: Calico CNI releasing IP address ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.854 [INFO][4518] ipam_plugin.go 411: Releasing address using handleID ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" HandleID="k8s-pod-network.07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.854 [INFO][4518] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.854 [INFO][4518] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.904 [WARNING][4518] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" HandleID="k8s-pod-network.07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.904 [INFO][4518] ipam_plugin.go 439: Releasing address using workloadID ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" HandleID="k8s-pod-network.07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.910 [INFO][4518] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:50.916350 containerd[1464]: 2024-07-02 00:20:50.913 [INFO][4508] k8s.go 621: Teardown processing complete. ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:50.917015 containerd[1464]: time="2024-07-02T00:20:50.916415114Z" level=info msg="TearDown network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\" successfully" Jul 2 00:20:50.917015 containerd[1464]: time="2024-07-02T00:20:50.916449809Z" level=info msg="StopPodSandbox for \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\" returns successfully" Jul 2 00:20:50.917445 containerd[1464]: time="2024-07-02T00:20:50.917403917Z" level=info msg="RemovePodSandbox for \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\"" Jul 2 00:20:50.920728 containerd[1464]: time="2024-07-02T00:20:50.920666914Z" level=info msg="Forcibly stopping sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\"" Jul 2 00:20:51.089389 sshd[4492]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:51.090645 systemd-networkd[1380]: cali9a1ad43b099: Gained IPv6LL Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:50.990 [WARNING][4548] k8s.go 572: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kch5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4d63e15-e37d-4fdd-89ad-91a83354224d", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423", Pod:"csi-node-driver-kch5r", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9a1ad43b099", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:50.990 [INFO][4548] k8s.go 608: Cleaning up netns ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:50.990 [INFO][4548] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" iface="eth0" netns="" Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:50.990 [INFO][4548] k8s.go 615: Releasing IP address(es) ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:50.991 [INFO][4548] utils.go 188: Calico CNI releasing IP address ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:51.030 [INFO][4564] ipam_plugin.go 411: Releasing address using handleID ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" HandleID="k8s-pod-network.07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:51.030 [INFO][4564] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:51.030 [INFO][4564] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:51.084 [WARNING][4564] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" HandleID="k8s-pod-network.07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:51.084 [INFO][4564] ipam_plugin.go 439: Releasing address using workloadID ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" HandleID="k8s-pod-network.07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Workload="localhost-k8s-csi--node--driver--kch5r-eth0" Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:51.086 [INFO][4564] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:20:51.091629 containerd[1464]: 2024-07-02 00:20:51.088 [INFO][4548] k8s.go 621: Teardown processing complete. ContainerID="07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a" Jul 2 00:20:51.092327 containerd[1464]: time="2024-07-02T00:20:51.092108745Z" level=info msg="TearDown network for sandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\" successfully" Jul 2 00:20:51.093578 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:20:51.093983 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:60018.service: Deactivated successfully. Jul 2 00:20:51.096111 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:20:51.098572 systemd-logind[1443]: Removed session 13. Jul 2 00:20:51.228505 containerd[1464]: time="2024-07-02T00:20:51.228324620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:20:51.233730 containerd[1464]: time="2024-07-02T00:20:51.233664800Z" level=info msg="RemovePodSandbox \"07e4329b5b9ef91d3db9a366a2e0b7ba711ecf58caaff18acc7a775ad4c8ae0a\" returns successfully" Jul 2 00:20:51.234423 containerd[1464]: time="2024-07-02T00:20:51.234396532Z" level=info msg="StopPodSandbox for \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\"" Jul 2 00:20:51.234538 containerd[1464]: time="2024-07-02T00:20:51.234519702Z" level=info msg="TearDown network for sandbox \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\" successfully" Jul 2 00:20:51.234569 containerd[1464]: time="2024-07-02T00:20:51.234538106Z" level=info msg="StopPodSandbox for \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\" returns successfully" Jul 2 00:20:51.234860 containerd[1464]: time="2024-07-02T00:20:51.234830806Z" level=info msg="RemovePodSandbox for \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\"" Jul 2 00:20:51.234860 containerd[1464]: time="2024-07-02T00:20:51.234855803Z" level=info msg="Forcibly stopping sandbox \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\"" Jul 2 00:20:51.234962 containerd[1464]: time="2024-07-02T00:20:51.234911818Z" level=info msg="TearDown network for sandbox \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\" successfully" Jul 2 00:20:51.621230 kubelet[2567]: E0702 00:20:51.621181 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:51.681463 containerd[1464]: time="2024-07-02T00:20:51.681334276Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:20:51.681463 containerd[1464]: time="2024-07-02T00:20:51.681454782Z" level=info msg="RemovePodSandbox \"b524b2a9e02b0b330a4724ebe96041d016969fec7e8d860d4f4c2fa79a26da68\" returns successfully" Jul 2 00:20:51.682170 containerd[1464]: time="2024-07-02T00:20:51.682118777Z" level=info msg="StopPodSandbox for \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\"" Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.055 [WARNING][4599] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0", GenerateName:"calico-kube-controllers-75688ffb6-", Namespace:"calico-system", SelfLink:"", UID:"d87f4c49-ede2-4763-a126-c48eb4c2c45e", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75688ffb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94", Pod:"calico-kube-controllers-75688ffb6-2922z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie464276c5ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.056 [INFO][4599] k8s.go 608: Cleaning up netns ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.056 [INFO][4599] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" iface="eth0" netns="" Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.056 [INFO][4599] k8s.go 615: Releasing IP address(es) ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.056 [INFO][4599] utils.go 188: Calico CNI releasing IP address ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.075 [INFO][4606] ipam_plugin.go 411: Releasing address using handleID ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" HandleID="k8s-pod-network.a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.075 [INFO][4606] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.077 [INFO][4606] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.081 [WARNING][4606] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" HandleID="k8s-pod-network.a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.081 [INFO][4606] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" HandleID="k8s-pod-network.a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.083 [INFO][4606] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:52.088429 containerd[1464]: 2024-07-02 00:20:52.085 [INFO][4599] k8s.go 621: Teardown processing complete. ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:52.089340 containerd[1464]: time="2024-07-02T00:20:52.088502026Z" level=info msg="TearDown network for sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\" successfully" Jul 2 00:20:52.089340 containerd[1464]: time="2024-07-02T00:20:52.088537573Z" level=info msg="StopPodSandbox for \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\" returns successfully" Jul 2 00:20:52.089340 containerd[1464]: time="2024-07-02T00:20:52.089047919Z" level=info msg="RemovePodSandbox for \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\"" Jul 2 00:20:52.089340 containerd[1464]: time="2024-07-02T00:20:52.089074860Z" level=info msg="Forcibly stopping sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\"" Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.418 [WARNING][4629] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0", GenerateName:"calico-kube-controllers-75688ffb6-", Namespace:"calico-system", SelfLink:"", UID:"d87f4c49-ede2-4763-a126-c48eb4c2c45e", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75688ffb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94", Pod:"calico-kube-controllers-75688ffb6-2922z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie464276c5ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.418 [INFO][4629] k8s.go 608: Cleaning up netns ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.418 [INFO][4629] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" iface="eth0" netns="" Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.418 [INFO][4629] k8s.go 615: Releasing IP address(es) ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.418 [INFO][4629] utils.go 188: Calico CNI releasing IP address ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.539 [INFO][4636] ipam_plugin.go 411: Releasing address using handleID ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" HandleID="k8s-pod-network.a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.539 [INFO][4636] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.539 [INFO][4636] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.614 [WARNING][4636] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" HandleID="k8s-pod-network.a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.614 [INFO][4636] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" HandleID="k8s-pod-network.a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Workload="localhost-k8s-calico--kube--controllers--75688ffb6--2922z-eth0" Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.618 [INFO][4636] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:52.622819 containerd[1464]: 2024-07-02 00:20:52.620 [INFO][4629] k8s.go 621: Teardown processing complete. ContainerID="a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf" Jul 2 00:20:52.623318 containerd[1464]: time="2024-07-02T00:20:52.622869841Z" level=info msg="TearDown network for sandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\" successfully" Jul 2 00:20:52.876600 containerd[1464]: time="2024-07-02T00:20:52.876368506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:20:52.876600 containerd[1464]: time="2024-07-02T00:20:52.876472211Z" level=info msg="RemovePodSandbox \"a7ca6083e8abf82edf47685d98b42dceaac1d06b368e145aa60995266e3353cf\" returns successfully" Jul 2 00:20:52.877192 containerd[1464]: time="2024-07-02T00:20:52.877136125Z" level=info msg="StopPodSandbox for \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\"" Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.105 [WARNING][4659] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--qdb7m-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f79a9b2f-f617-4af5-98dd-a2cf84643f11", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2", Pod:"coredns-5dd5756b68-qdb7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9eb4a94175f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.106 [INFO][4659] k8s.go 608: Cleaning up netns ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.106 [INFO][4659] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" iface="eth0" netns="" Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.106 [INFO][4659] k8s.go 615: Releasing IP address(es) ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.106 [INFO][4659] utils.go 188: Calico CNI releasing IP address ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.124 [INFO][4667] ipam_plugin.go 411: Releasing address using handleID ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" HandleID="k8s-pod-network.5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.124 [INFO][4667] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.124 [INFO][4667] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.288 [WARNING][4667] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" HandleID="k8s-pod-network.5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.288 [INFO][4667] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" HandleID="k8s-pod-network.5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.290 [INFO][4667] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:53.296431 containerd[1464]: 2024-07-02 00:20:53.293 [INFO][4659] k8s.go 621: Teardown processing complete. ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:53.297172 containerd[1464]: time="2024-07-02T00:20:53.296518207Z" level=info msg="TearDown network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\" successfully" Jul 2 00:20:53.297172 containerd[1464]: time="2024-07-02T00:20:53.296553233Z" level=info msg="StopPodSandbox for \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\" returns successfully" Jul 2 00:20:53.297172 containerd[1464]: time="2024-07-02T00:20:53.297160692Z" level=info msg="RemovePodSandbox for \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\"" Jul 2 00:20:53.297247 containerd[1464]: time="2024-07-02T00:20:53.297195547Z" level=info msg="Forcibly stopping sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\"" Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.336 [WARNING][4690] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--qdb7m-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f79a9b2f-f617-4af5-98dd-a2cf84643f11", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69305cc276fbf4873bcff050d61bd9e5ced5ee6df08de7e4930ae77a807d1db2", Pod:"coredns-5dd5756b68-qdb7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9eb4a94175f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.336 [INFO][4690] k8s.go 608: 
Cleaning up netns ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.336 [INFO][4690] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" iface="eth0" netns="" Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.336 [INFO][4690] k8s.go 615: Releasing IP address(es) ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.336 [INFO][4690] utils.go 188: Calico CNI releasing IP address ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.359 [INFO][4698] ipam_plugin.go 411: Releasing address using handleID ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" HandleID="k8s-pod-network.5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.359 [INFO][4698] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.359 [INFO][4698] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.641 [WARNING][4698] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" HandleID="k8s-pod-network.5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.641 [INFO][4698] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" HandleID="k8s-pod-network.5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Workload="localhost-k8s-coredns--5dd5756b68--qdb7m-eth0" Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.643 [INFO][4698] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:53.647788 containerd[1464]: 2024-07-02 00:20:53.645 [INFO][4690] k8s.go 621: Teardown processing complete. ContainerID="5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0" Jul 2 00:20:53.647788 containerd[1464]: time="2024-07-02T00:20:53.647765542Z" level=info msg="TearDown network for sandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\" successfully" Jul 2 00:20:53.770710 containerd[1464]: time="2024-07-02T00:20:53.770611410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:20:53.770710 containerd[1464]: time="2024-07-02T00:20:53.770703143Z" level=info msg="RemovePodSandbox \"5967fc312f1248c17695275aa57c61dc72cad8bd78623927d3348a8866da3eb0\" returns successfully" Jul 2 00:20:55.759739 containerd[1464]: time="2024-07-02T00:20:55.759675778Z" level=info msg="StopPodSandbox for \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\"" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:55.997 [INFO][4734] k8s.go 608: Cleaning up netns ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:55.997 [INFO][4734] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" iface="eth0" netns="/var/run/netns/cni-bfe9c598-f964-5253-2b24-ff0802bf0881" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:55.997 [INFO][4734] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" iface="eth0" netns="/var/run/netns/cni-bfe9c598-f964-5253-2b24-ff0802bf0881" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:55.998 [INFO][4734] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" iface="eth0" netns="/var/run/netns/cni-bfe9c598-f964-5253-2b24-ff0802bf0881" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:55.998 [INFO][4734] k8s.go 615: Releasing IP address(es) ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:55.998 [INFO][4734] utils.go 188: Calico CNI releasing IP address ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:56.018 [INFO][4743] ipam_plugin.go 411: Releasing address using handleID ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" HandleID="k8s-pod-network.47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:56.018 [INFO][4743] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:56.018 [INFO][4743] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:56.024 [WARNING][4743] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" HandleID="k8s-pod-network.47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:56.024 [INFO][4743] ipam_plugin.go 439: Releasing address using workloadID ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" HandleID="k8s-pod-network.47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:56.026 [INFO][4743] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:20:56.031248 containerd[1464]: 2024-07-02 00:20:56.028 [INFO][4734] k8s.go 621: Teardown processing complete. ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:20:56.031706 containerd[1464]: time="2024-07-02T00:20:56.031459319Z" level=info msg="TearDown network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\" successfully" Jul 2 00:20:56.031706 containerd[1464]: time="2024-07-02T00:20:56.031510295Z" level=info msg="StopPodSandbox for \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\" returns successfully" Jul 2 00:20:56.032168 kubelet[2567]: E0702 00:20:56.032129 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:20:56.033007 containerd[1464]: time="2024-07-02T00:20:56.032960483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-s85f2,Uid:76223419-840c-46e2-a1d7-e6871c06b488,Namespace:kube-system,Attempt:1,}" Jul 2 00:20:56.034853 systemd[1]: run-netns-cni\x2dbfe9c598\x2df964\x2d5253\x2d2b24\x2dff0802bf0881.mount: Deactivated successfully. 
Jul 2 00:20:56.048911 containerd[1464]: time="2024-07-02T00:20:56.048823100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:56.109776 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:60028.service - OpenSSH per-connection server daemon (10.0.0.1:60028). Jul 2 00:20:56.163198 sshd[4751]: Accepted publickey for core from 10.0.0.1 port 60028 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:20:56.165118 sshd[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:56.169395 systemd-logind[1443]: New session 14 of user core. Jul 2 00:20:56.176675 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:20:56.241047 containerd[1464]: time="2024-07-02T00:20:56.240959149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 00:20:56.279278 containerd[1464]: time="2024-07-02T00:20:56.279180849Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:56.314373 sshd[4751]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:56.327964 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:60028.service: Deactivated successfully. Jul 2 00:20:56.330563 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:20:56.332532 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:20:56.337776 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:60040.service - OpenSSH per-connection server daemon (10.0.0.1:60040). Jul 2 00:20:56.339825 systemd-logind[1443]: Removed session 14. 
Jul 2 00:20:56.371123 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 60040 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:20:56.373022 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:56.377120 systemd-logind[1443]: New session 15 of user core. Jul 2 00:20:56.384593 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:20:56.501070 containerd[1464]: time="2024-07-02T00:20:56.500938418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:20:56.502185 containerd[1464]: time="2024-07-02T00:20:56.502051955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 7.69043889s" Jul 2 00:20:56.502185 containerd[1464]: time="2024-07-02T00:20:56.502090126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 00:20:56.511859 containerd[1464]: time="2024-07-02T00:20:56.511801552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:20:56.528947 containerd[1464]: time="2024-07-02T00:20:56.528892511Z" level=info msg="CreateContainer within sandbox \"352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:20:56.941844 sshd[4767]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:56.951671 systemd[1]: 
sshd@14-10.0.0.84:22-10.0.0.1:60040.service: Deactivated successfully. Jul 2 00:20:56.953873 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:20:56.955397 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:20:56.962938 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:60044.service - OpenSSH per-connection server daemon (10.0.0.1:60044). Jul 2 00:20:56.964038 systemd-logind[1443]: Removed session 15. Jul 2 00:20:56.989941 sshd[4783]: Accepted publickey for core from 10.0.0.1 port 60044 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:20:56.991519 sshd[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:56.995531 systemd-logind[1443]: New session 16 of user core. Jul 2 00:20:57.008615 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:20:58.107256 sshd[4783]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:58.110913 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:60044.service: Deactivated successfully. Jul 2 00:20:58.112916 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:20:58.113604 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:20:58.114401 systemd-logind[1443]: Removed session 16. 
Jul 2 00:21:00.460039 kubelet[2567]: E0702 00:21:00.460003 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:01.303865 containerd[1464]: time="2024-07-02T00:21:01.303799955Z" level=info msg="CreateContainer within sandbox \"352bc2f9e970798ecfe16b43cb646ef3ef6ec71806b98c062029e6490583ad94\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"db02a4e11e7d05289f97b784a7425eb0f46424a1738c7684230d5186bafc6fd3\"" Jul 2 00:21:01.304513 containerd[1464]: time="2024-07-02T00:21:01.304430557Z" level=info msg="StartContainer for \"db02a4e11e7d05289f97b784a7425eb0f46424a1738c7684230d5186bafc6fd3\"" Jul 2 00:21:01.332859 systemd[1]: Started cri-containerd-db02a4e11e7d05289f97b784a7425eb0f46424a1738c7684230d5186bafc6fd3.scope - libcontainer container db02a4e11e7d05289f97b784a7425eb0f46424a1738c7684230d5186bafc6fd3. Jul 2 00:21:01.421118 systemd-networkd[1380]: cali89dfe09198d: Link UP Jul 2 00:21:01.421364 systemd-networkd[1380]: cali89dfe09198d: Gained carrier Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:00.905 [INFO][4810] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--s85f2-eth0 coredns-5dd5756b68- kube-system 76223419-840c-46e2-a1d7-e6871c06b488 940 0 2024-07-02 00:20:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-s85f2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89dfe09198d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Namespace="kube-system" Pod="coredns-5dd5756b68-s85f2" 
WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--s85f2-" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:00.905 [INFO][4810] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Namespace="kube-system" Pod="coredns-5dd5756b68-s85f2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.147 [INFO][4828] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" HandleID="k8s-pod-network.df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.155 [INFO][4828] ipam_plugin.go 264: Auto assigning IP ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" HandleID="k8s-pod-network.df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e70a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-s85f2", "timestamp":"2024-07-02 00:21:01.147740328 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.155 [INFO][4828] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.155 [INFO][4828] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.155 [INFO][4828] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.165 [INFO][4828] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" host="localhost" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.169 [INFO][4828] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.174 [INFO][4828] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.176 [INFO][4828] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.178 [INFO][4828] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.178 [INFO][4828] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" host="localhost" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.179 [INFO][4828] ipam.go 1685: Creating new handle: k8s-pod-network.df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.183 [INFO][4828] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" host="localhost" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.416 [INFO][4828] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" host="localhost" Jul 2 
00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.416 [INFO][4828] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" host="localhost" Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.416 [INFO][4828] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:21:01.464317 containerd[1464]: 2024-07-02 00:21:01.416 [INFO][4828] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" HandleID="k8s-pod-network.df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:01.464987 containerd[1464]: 2024-07-02 00:21:01.419 [INFO][4810] k8s.go 386: Populated endpoint ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Namespace="kube-system" Pod="coredns-5dd5756b68-s85f2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--s85f2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"76223419-840c-46e2-a1d7-e6871c06b488", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-5dd5756b68-s85f2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dfe09198d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:21:01.464987 containerd[1464]: 2024-07-02 00:21:01.419 [INFO][4810] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Namespace="kube-system" Pod="coredns-5dd5756b68-s85f2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:01.464987 containerd[1464]: 2024-07-02 00:21:01.419 [INFO][4810] dataplane_linux.go 68: Setting the host side veth name to cali89dfe09198d ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Namespace="kube-system" Pod="coredns-5dd5756b68-s85f2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:01.464987 containerd[1464]: 2024-07-02 00:21:01.421 [INFO][4810] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Namespace="kube-system" Pod="coredns-5dd5756b68-s85f2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:01.464987 containerd[1464]: 2024-07-02 00:21:01.421 [INFO][4810] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Namespace="kube-system" Pod="coredns-5dd5756b68-s85f2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--s85f2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"76223419-840c-46e2-a1d7-e6871c06b488", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf", Pod:"coredns-5dd5756b68-s85f2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dfe09198d", MAC:"62:4a:d6:3d:b2:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:21:01.464987 containerd[1464]: 2024-07-02 00:21:01.461 [INFO][4810] k8s.go 500: Wrote updated endpoint to datastore ContainerID="df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf" Namespace="kube-system" Pod="coredns-5dd5756b68-s85f2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:01.521389 containerd[1464]: time="2024-07-02T00:21:01.521319161Z" level=info msg="StartContainer for \"db02a4e11e7d05289f97b784a7425eb0f46424a1738c7684230d5186bafc6fd3\" returns successfully" Jul 2 00:21:01.806587 kubelet[2567]: I0702 00:21:01.806445 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75688ffb6-2922z" podStartSLOduration=46.114006071 podCreationTimestamp="2024-07-02 00:20:08 +0000 UTC" firstStartedPulling="2024-07-02 00:20:48.811197776 +0000 UTC m=+58.154010762" lastFinishedPulling="2024-07-02 00:20:56.503584879 +0000 UTC m=+65.846397875" observedRunningTime="2024-07-02 00:21:01.805785044 +0000 UTC m=+71.148598040" watchObservedRunningTime="2024-07-02 00:21:01.806393184 +0000 UTC m=+71.149206170" Jul 2 00:21:01.813812 containerd[1464]: time="2024-07-02T00:21:01.813708298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:21:01.813959 containerd[1464]: time="2024-07-02T00:21:01.813794901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:01.813959 containerd[1464]: time="2024-07-02T00:21:01.813822853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:21:01.813959 containerd[1464]: time="2024-07-02T00:21:01.813836649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:01.832661 systemd[1]: Started cri-containerd-df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf.scope - libcontainer container df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf. Jul 2 00:21:01.862887 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:21:01.890700 containerd[1464]: time="2024-07-02T00:21:01.890652777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-s85f2,Uid:76223419-840c-46e2-a1d7-e6871c06b488,Namespace:kube-system,Attempt:1,} returns sandbox id \"df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf\"" Jul 2 00:21:01.891507 kubelet[2567]: E0702 00:21:01.891462 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:01.893994 containerd[1464]: time="2024-07-02T00:21:01.893958084Z" level=info msg="CreateContainer within sandbox \"df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:21:02.482719 systemd-networkd[1380]: cali89dfe09198d: Gained IPv6LL Jul 2 00:21:03.140752 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:58840.service - OpenSSH per-connection server daemon (10.0.0.1:58840). Jul 2 00:21:03.178077 sshd[4951]: Accepted publickey for core from 10.0.0.1 port 58840 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:03.179984 sshd[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:03.184934 systemd-logind[1443]: New session 17 of user core. Jul 2 00:21:03.189762 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 2 00:21:03.264934 containerd[1464]: time="2024-07-02T00:21:03.264871624Z" level=info msg="CreateContainer within sandbox \"df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82473ae859596d1fd60621d42eb8e0b72cb35a16ee26dbf22b2269bbd7748f74\"" Jul 2 00:21:03.266725 containerd[1464]: time="2024-07-02T00:21:03.265614997Z" level=info msg="StartContainer for \"82473ae859596d1fd60621d42eb8e0b72cb35a16ee26dbf22b2269bbd7748f74\"" Jul 2 00:21:03.301621 systemd[1]: Started cri-containerd-82473ae859596d1fd60621d42eb8e0b72cb35a16ee26dbf22b2269bbd7748f74.scope - libcontainer container 82473ae859596d1fd60621d42eb8e0b72cb35a16ee26dbf22b2269bbd7748f74. Jul 2 00:21:03.548719 sshd[4951]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:03.554218 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:58840.service: Deactivated successfully. Jul 2 00:21:03.556926 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:21:03.558403 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:21:03.559787 systemd-logind[1443]: Removed session 17. 
Jul 2 00:21:03.614238 containerd[1464]: time="2024-07-02T00:21:03.614176414Z" level=info msg="StartContainer for \"82473ae859596d1fd60621d42eb8e0b72cb35a16ee26dbf22b2269bbd7748f74\" returns successfully" Jul 2 00:21:03.652343 kubelet[2567]: E0702 00:21:03.652311 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:04.654552 kubelet[2567]: E0702 00:21:04.654510 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:04.668045 kubelet[2567]: I0702 00:21:04.667997 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-s85f2" podStartSLOduration=62.667955943 podCreationTimestamp="2024-07-02 00:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:21:03.725750568 +0000 UTC m=+73.068563554" watchObservedRunningTime="2024-07-02 00:21:04.667955943 +0000 UTC m=+74.010768929" Jul 2 00:21:05.658766 kubelet[2567]: E0702 00:21:05.658719 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:06.020636 containerd[1464]: time="2024-07-02T00:21:06.020417679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:06.063632 containerd[1464]: time="2024-07-02T00:21:06.063526980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:21:06.093929 containerd[1464]: time="2024-07-02T00:21:06.093773767Z" level=info msg="ImageCreate event 
name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:06.285974 containerd[1464]: time="2024-07-02T00:21:06.285903393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:06.286804 containerd[1464]: time="2024-07-02T00:21:06.286766481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 9.77490705s" Jul 2 00:21:06.286804 containerd[1464]: time="2024-07-02T00:21:06.286798631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:21:06.288913 containerd[1464]: time="2024-07-02T00:21:06.288834168Z" level=info msg="CreateContainer within sandbox \"2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:21:06.661160 kubelet[2567]: E0702 00:21:06.661035 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:07.021750 containerd[1464]: time="2024-07-02T00:21:07.021610823Z" level=info msg="CreateContainer within sandbox \"2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"58f6d808b4ddad33a347bc6bfcaf387de96ccf5ea28bfc8645fbf3b12ac189dd\"" Jul 2 00:21:07.022220 containerd[1464]: 
time="2024-07-02T00:21:07.022150234Z" level=info msg="StartContainer for \"58f6d808b4ddad33a347bc6bfcaf387de96ccf5ea28bfc8645fbf3b12ac189dd\"" Jul 2 00:21:07.061733 systemd[1]: Started cri-containerd-58f6d808b4ddad33a347bc6bfcaf387de96ccf5ea28bfc8645fbf3b12ac189dd.scope - libcontainer container 58f6d808b4ddad33a347bc6bfcaf387de96ccf5ea28bfc8645fbf3b12ac189dd. Jul 2 00:21:07.128686 containerd[1464]: time="2024-07-02T00:21:07.128621529Z" level=info msg="StartContainer for \"58f6d808b4ddad33a347bc6bfcaf387de96ccf5ea28bfc8645fbf3b12ac189dd\" returns successfully" Jul 2 00:21:07.129827 containerd[1464]: time="2024-07-02T00:21:07.129761286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:21:08.569817 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:37288.service - OpenSSH per-connection server daemon (10.0.0.1:37288). Jul 2 00:21:08.645091 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 37288 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:08.646951 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:08.651472 systemd-logind[1443]: New session 18 of user core. Jul 2 00:21:08.661654 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:21:08.834804 sshd[5059]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:08.839736 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:37288.service: Deactivated successfully. Jul 2 00:21:08.841967 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:21:08.842779 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:21:08.843970 systemd-logind[1443]: Removed session 18. 
Jul 2 00:21:09.685564 containerd[1464]: time="2024-07-02T00:21:09.685462854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:09.686828 containerd[1464]: time="2024-07-02T00:21:09.686569348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:21:09.699357 containerd[1464]: time="2024-07-02T00:21:09.699228412Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:09.704831 containerd[1464]: time="2024-07-02T00:21:09.704770373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:09.705613 containerd[1464]: time="2024-07-02T00:21:09.705547349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.575751249s" Jul 2 00:21:09.705613 containerd[1464]: time="2024-07-02T00:21:09.705598515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:21:09.710143 containerd[1464]: time="2024-07-02T00:21:09.710077152Z" level=info msg="CreateContainer within sandbox \"2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:21:09.744142 containerd[1464]: time="2024-07-02T00:21:09.744082066Z" level=info msg="CreateContainer within sandbox \"2959803581b8aaaebd7ef2a8476b3be70c520d94a2b65a554422b86c387f3423\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1b9c4dececf70f03e3b11faaf2147a60b573cf40b60319f5b1779c705241fa8e\"" Jul 2 00:21:09.747594 containerd[1464]: time="2024-07-02T00:21:09.744809360Z" level=info msg="StartContainer for \"1b9c4dececf70f03e3b11faaf2147a60b573cf40b60319f5b1779c705241fa8e\"" Jul 2 00:21:09.787943 systemd[1]: Started cri-containerd-1b9c4dececf70f03e3b11faaf2147a60b573cf40b60319f5b1779c705241fa8e.scope - libcontainer container 1b9c4dececf70f03e3b11faaf2147a60b573cf40b60319f5b1779c705241fa8e. Jul 2 00:21:09.921198 containerd[1464]: time="2024-07-02T00:21:09.921146336Z" level=info msg="StartContainer for \"1b9c4dececf70f03e3b11faaf2147a60b573cf40b60319f5b1779c705241fa8e\" returns successfully" Jul 2 00:21:10.684185 kubelet[2567]: I0702 00:21:10.683756 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-kch5r" podStartSLOduration=43.042733469 podCreationTimestamp="2024-07-02 00:20:08 +0000 UTC" firstStartedPulling="2024-07-02 00:20:50.064892146 +0000 UTC m=+59.407705132" lastFinishedPulling="2024-07-02 00:21:09.705852741 +0000 UTC m=+79.048665727" observedRunningTime="2024-07-02 00:21:10.682596255 +0000 UTC m=+80.025409241" watchObservedRunningTime="2024-07-02 00:21:10.683694064 +0000 UTC m=+80.026507050" Jul 2 00:21:10.854888 kubelet[2567]: I0702 00:21:10.854322 2567 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:21:10.854888 kubelet[2567]: I0702 00:21:10.854375 2567 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:21:11.759500 kubelet[2567]: E0702 00:21:11.759459 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:13.851870 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:37292.service - OpenSSH per-connection server daemon (10.0.0.1:37292). Jul 2 00:21:13.895251 sshd[5131]: Accepted publickey for core from 10.0.0.1 port 37292 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:13.897455 sshd[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:13.903385 systemd-logind[1443]: New session 19 of user core. Jul 2 00:21:13.910763 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:21:14.062206 sshd[5131]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:14.066366 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:21:14.066823 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:37292.service: Deactivated successfully. Jul 2 00:21:14.068946 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:21:14.070320 systemd-logind[1443]: Removed session 19. Jul 2 00:21:17.776451 kubelet[2567]: E0702 00:21:17.776416 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:19.075500 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:40214.service - OpenSSH per-connection server daemon (10.0.0.1:40214). Jul 2 00:21:19.109337 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 40214 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:19.111250 sshd[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:19.116222 systemd-logind[1443]: New session 20 of user core. 
Jul 2 00:21:19.121771 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:21:19.236496 sshd[5171]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:19.240902 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:40214.service: Deactivated successfully. Jul 2 00:21:19.243084 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:21:19.243854 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:21:19.245037 systemd-logind[1443]: Removed session 20. Jul 2 00:21:22.760643 kubelet[2567]: E0702 00:21:22.760592 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:23.759815 kubelet[2567]: E0702 00:21:23.759752 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:24.250499 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:40230.service - OpenSSH per-connection server daemon (10.0.0.1:40230). Jul 2 00:21:24.292889 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 40230 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:24.294771 sshd[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:24.299783 systemd-logind[1443]: New session 21 of user core. Jul 2 00:21:24.309646 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:21:24.431689 sshd[5193]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:24.444999 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:40230.service: Deactivated successfully. Jul 2 00:21:24.447439 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:21:24.450471 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. 
Jul 2 00:21:24.459070 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:40240.service - OpenSSH per-connection server daemon (10.0.0.1:40240). Jul 2 00:21:24.460756 systemd-logind[1443]: Removed session 21. Jul 2 00:21:24.488582 sshd[5207]: Accepted publickey for core from 10.0.0.1 port 40240 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:24.490133 sshd[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:24.495145 systemd-logind[1443]: New session 22 of user core. Jul 2 00:21:24.504612 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:21:24.901876 sshd[5207]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:24.914690 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:40240.service: Deactivated successfully. Jul 2 00:21:24.917230 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:21:24.919656 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:21:24.928825 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:40252.service - OpenSSH per-connection server daemon (10.0.0.1:40252). Jul 2 00:21:24.929985 systemd-logind[1443]: Removed session 22. Jul 2 00:21:24.971168 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 40252 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:24.973103 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:24.978463 systemd-logind[1443]: New session 23 of user core. Jul 2 00:21:24.985692 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:21:26.231211 sshd[5220]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:26.242997 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:40252.service: Deactivated successfully. Jul 2 00:21:26.247228 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:21:26.248759 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. 
Jul 2 00:21:26.258656 systemd[1]: Started sshd@23-10.0.0.84:22-10.0.0.1:40254.service - OpenSSH per-connection server daemon (10.0.0.1:40254). Jul 2 00:21:26.263778 systemd-logind[1443]: Removed session 23. Jul 2 00:21:26.293476 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 40254 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:26.295258 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:26.300132 systemd-logind[1443]: New session 24 of user core. Jul 2 00:21:26.313840 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:21:26.605981 sshd[5239]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:26.617665 systemd[1]: sshd@23-10.0.0.84:22-10.0.0.1:40254.service: Deactivated successfully. Jul 2 00:21:26.620090 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:21:26.622153 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:21:26.633911 systemd[1]: Started sshd@24-10.0.0.84:22-10.0.0.1:40266.service - OpenSSH per-connection server daemon (10.0.0.1:40266). Jul 2 00:21:26.636741 systemd-logind[1443]: Removed session 24. Jul 2 00:21:26.664840 sshd[5251]: Accepted publickey for core from 10.0.0.1 port 40266 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:26.666628 sshd[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:26.671468 systemd-logind[1443]: New session 25 of user core. Jul 2 00:21:26.680741 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 2 00:21:26.760680 kubelet[2567]: E0702 00:21:26.760623 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:26.876863 sshd[5251]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:26.882556 systemd[1]: sshd@24-10.0.0.84:22-10.0.0.1:40266.service: Deactivated successfully. Jul 2 00:21:26.885026 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:21:26.885961 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:21:26.887234 systemd-logind[1443]: Removed session 25. Jul 2 00:21:27.759726 kubelet[2567]: E0702 00:21:27.759678 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:30.485095 systemd[1]: run-containerd-runc-k8s.io-db02a4e11e7d05289f97b784a7425eb0f46424a1738c7684230d5186bafc6fd3-runc.5waIHm.mount: Deactivated successfully. Jul 2 00:21:31.889101 systemd[1]: Started sshd@25-10.0.0.84:22-10.0.0.1:60228.service - OpenSSH per-connection server daemon (10.0.0.1:60228). Jul 2 00:21:31.921736 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 60228 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:31.923386 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:31.927848 systemd-logind[1443]: New session 26 of user core. Jul 2 00:21:31.939700 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:21:32.047184 sshd[5297]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:32.051415 systemd[1]: sshd@25-10.0.0.84:22-10.0.0.1:60228.service: Deactivated successfully. Jul 2 00:21:32.053620 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:21:32.054258 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit. 
Jul 2 00:21:32.055137 systemd-logind[1443]: Removed session 26. Jul 2 00:21:37.062324 systemd[1]: Started sshd@26-10.0.0.84:22-10.0.0.1:60230.service - OpenSSH per-connection server daemon (10.0.0.1:60230). Jul 2 00:21:37.101897 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 60230 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:37.104242 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:37.109664 systemd-logind[1443]: New session 27 of user core. Jul 2 00:21:37.115846 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:21:37.253326 sshd[5316]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:37.259392 systemd[1]: sshd@26-10.0.0.84:22-10.0.0.1:60230.service: Deactivated successfully. Jul 2 00:21:37.263116 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:21:37.264114 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit. Jul 2 00:21:37.265789 systemd-logind[1443]: Removed session 27. 
Jul 2 00:21:37.768537 kubelet[2567]: I0702 00:21:37.767062 2567 topology_manager.go:215] "Topology Admit Handler" podUID="a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8" podNamespace="calico-apiserver" podName="calico-apiserver-6c6b556c9-nf6r8" Jul 2 00:21:37.777001 kubelet[2567]: W0702 00:21:37.776932 2567 reflector.go:535] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jul 2 00:21:37.777179 kubelet[2567]: E0702 00:21:37.777051 2567 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jul 2 00:21:37.780468 systemd[1]: Created slice kubepods-besteffort-poda7a06b5b_97fc_4ca8_b2e6_f48bdbf8fad8.slice - libcontainer container kubepods-besteffort-poda7a06b5b_97fc_4ca8_b2e6_f48bdbf8fad8.slice. 
Jul 2 00:21:37.805468 kubelet[2567]: I0702 00:21:37.805378 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8-calico-apiserver-certs\") pod \"calico-apiserver-6c6b556c9-nf6r8\" (UID: \"a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8\") " pod="calico-apiserver/calico-apiserver-6c6b556c9-nf6r8" Jul 2 00:21:37.805468 kubelet[2567]: I0702 00:21:37.805461 2567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzth8\" (UniqueName: \"kubernetes.io/projected/a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8-kube-api-access-jzth8\") pod \"calico-apiserver-6c6b556c9-nf6r8\" (UID: \"a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8\") " pod="calico-apiserver/calico-apiserver-6c6b556c9-nf6r8" Jul 2 00:21:37.906943 kubelet[2567]: E0702 00:21:37.906900 2567 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:21:37.908689 kubelet[2567]: E0702 00:21:37.908662 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8-calico-apiserver-certs podName:a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8 nodeName:}" failed. No retries permitted until 2024-07-02 00:21:38.406946557 +0000 UTC m=+107.749759543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8-calico-apiserver-certs") pod "calico-apiserver-6c6b556c9-nf6r8" (UID: "a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8") : secret "calico-apiserver-certs" not found Jul 2 00:21:38.985204 containerd[1464]: time="2024-07-02T00:21:38.985131103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b556c9-nf6r8,Uid:a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:21:39.132418 systemd-networkd[1380]: calib81c76e82b9: Link UP Jul 2 00:21:39.133859 systemd-networkd[1380]: calib81c76e82b9: Gained carrier Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.066 [INFO][5336] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0 calico-apiserver-6c6b556c9- calico-apiserver a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8 1227 0 2024-07-02 00:21:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6b556c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c6b556c9-nf6r8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib81c76e82b9 [] []}} ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b556c9-nf6r8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.066 [INFO][5336] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b556c9-nf6r8" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.090 [INFO][5349] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" HandleID="k8s-pod-network.5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Workload="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.099 [INFO][5349] ipam_plugin.go 264: Auto assigning IP ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" HandleID="k8s-pod-network.5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Workload="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c6b556c9-nf6r8", "timestamp":"2024-07-02 00:21:39.090014094 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.099 [INFO][5349] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.099 [INFO][5349] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.099 [INFO][5349] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.101 [INFO][5349] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" host="localhost" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.106 [INFO][5349] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.112 [INFO][5349] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.114 [INFO][5349] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.116 [INFO][5349] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.116 [INFO][5349] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" host="localhost" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.118 [INFO][5349] ipam.go 1685: Creating new handle: k8s-pod-network.5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03 Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.121 [INFO][5349] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" host="localhost" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.126 [INFO][5349] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" host="localhost" Jul 2 
00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.126 [INFO][5349] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" host="localhost" Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.126 [INFO][5349] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:21:39.146962 containerd[1464]: 2024-07-02 00:21:39.126 [INFO][5349] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" HandleID="k8s-pod-network.5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Workload="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" Jul 2 00:21:39.147745 containerd[1464]: 2024-07-02 00:21:39.129 [INFO][5336] k8s.go 386: Populated endpoint ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b556c9-nf6r8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0", GenerateName:"calico-apiserver-6c6b556c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b556c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c6b556c9-nf6r8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib81c76e82b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:21:39.147745 containerd[1464]: 2024-07-02 00:21:39.129 [INFO][5336] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b556c9-nf6r8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" Jul 2 00:21:39.147745 containerd[1464]: 2024-07-02 00:21:39.129 [INFO][5336] dataplane_linux.go 68: Setting the host side veth name to calib81c76e82b9 ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b556c9-nf6r8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" Jul 2 00:21:39.147745 containerd[1464]: 2024-07-02 00:21:39.134 [INFO][5336] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b556c9-nf6r8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" Jul 2 00:21:39.147745 containerd[1464]: 2024-07-02 00:21:39.135 [INFO][5336] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b556c9-nf6r8" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0", GenerateName:"calico-apiserver-6c6b556c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6b556c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03", Pod:"calico-apiserver-6c6b556c9-nf6r8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib81c76e82b9", MAC:"66:f9:68:a9:f2:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:21:39.147745 containerd[1464]: 2024-07-02 00:21:39.142 [INFO][5336] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03" Namespace="calico-apiserver" Pod="calico-apiserver-6c6b556c9-nf6r8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c6b556c9--nf6r8-eth0" Jul 2 00:21:39.170343 containerd[1464]: 
time="2024-07-02T00:21:39.170201479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:21:39.170343 containerd[1464]: time="2024-07-02T00:21:39.170280236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:39.170343 containerd[1464]: time="2024-07-02T00:21:39.170331953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:21:39.170639 containerd[1464]: time="2024-07-02T00:21:39.170374814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:39.196784 systemd[1]: Started cri-containerd-5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03.scope - libcontainer container 5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03. Jul 2 00:21:39.210826 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:21:39.240309 containerd[1464]: time="2024-07-02T00:21:39.240035729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6b556c9-nf6r8,Uid:a7a06b5b-97fc-4ca8-b2e6-f48bdbf8fad8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03\"" Jul 2 00:21:39.241986 containerd[1464]: time="2024-07-02T00:21:39.241924881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:21:41.138693 systemd-networkd[1380]: calib81c76e82b9: Gained IPv6LL Jul 2 00:21:42.273402 systemd[1]: Started sshd@27-10.0.0.84:22-10.0.0.1:55122.service - OpenSSH per-connection server daemon (10.0.0.1:55122). 
Jul 2 00:21:42.314556 sshd[5423]: Accepted publickey for core from 10.0.0.1 port 55122 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:42.316154 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:42.322251 systemd-logind[1443]: New session 28 of user core. Jul 2 00:21:42.328664 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 00:21:42.498831 sshd[5423]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:42.502988 systemd[1]: sshd@27-10.0.0.84:22-10.0.0.1:55122.service: Deactivated successfully. Jul 2 00:21:42.505315 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:21:42.506061 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:21:42.506997 systemd-logind[1443]: Removed session 28. Jul 2 00:21:42.592354 containerd[1464]: time="2024-07-02T00:21:42.592276029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:42.593862 containerd[1464]: time="2024-07-02T00:21:42.593793004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jul 2 00:21:42.595221 containerd[1464]: time="2024-07-02T00:21:42.595183470Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:42.598599 containerd[1464]: time="2024-07-02T00:21:42.598557657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:42.599280 containerd[1464]: time="2024-07-02T00:21:42.599246570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id 
\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.357282595s" Jul 2 00:21:42.599280 containerd[1464]: time="2024-07-02T00:21:42.599272939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 00:21:42.600986 containerd[1464]: time="2024-07-02T00:21:42.600960674Z" level=info msg="CreateContainer within sandbox \"5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:21:42.687472 containerd[1464]: time="2024-07-02T00:21:42.687391326Z" level=info msg="CreateContainer within sandbox \"5b30591ea00ee9b9c598f6f3d2629aad47c62ed5eb2db0bffa7c102fcaf2ed03\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"adafd5812c0cdff772ca375dd83a1fbd63392eeb9cbe1561aff836df2a436e38\"" Jul 2 00:21:42.688184 containerd[1464]: time="2024-07-02T00:21:42.688136504Z" level=info msg="StartContainer for \"adafd5812c0cdff772ca375dd83a1fbd63392eeb9cbe1561aff836df2a436e38\"" Jul 2 00:21:42.771777 systemd[1]: Started cri-containerd-adafd5812c0cdff772ca375dd83a1fbd63392eeb9cbe1561aff836df2a436e38.scope - libcontainer container adafd5812c0cdff772ca375dd83a1fbd63392eeb9cbe1561aff836df2a436e38. 
Jul 2 00:21:42.867948 containerd[1464]: time="2024-07-02T00:21:42.866395951Z" level=info msg="StartContainer for \"adafd5812c0cdff772ca375dd83a1fbd63392eeb9cbe1561aff836df2a436e38\" returns successfully" Jul 2 00:21:43.770987 kubelet[2567]: I0702 00:21:43.769501 2567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c6b556c9-nf6r8" podStartSLOduration=3.411318532 podCreationTimestamp="2024-07-02 00:21:37 +0000 UTC" firstStartedPulling="2024-07-02 00:21:39.241420816 +0000 UTC m=+108.584233802" lastFinishedPulling="2024-07-02 00:21:42.599544588 +0000 UTC m=+111.942357574" observedRunningTime="2024-07-02 00:21:43.768331701 +0000 UTC m=+113.111144687" watchObservedRunningTime="2024-07-02 00:21:43.769442304 +0000 UTC m=+113.112255290" Jul 2 00:21:47.503068 systemd[1]: Started sshd@28-10.0.0.84:22-10.0.0.1:55134.service - OpenSSH per-connection server daemon (10.0.0.1:55134). Jul 2 00:21:47.547245 sshd[5511]: Accepted publickey for core from 10.0.0.1 port 55134 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:47.549154 sshd[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:47.554505 systemd-logind[1443]: New session 29 of user core. Jul 2 00:21:47.559685 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 00:21:47.678111 sshd[5511]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:47.683455 systemd[1]: sshd@28-10.0.0.84:22-10.0.0.1:55134.service: Deactivated successfully. Jul 2 00:21:47.686124 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 00:21:47.687406 systemd-logind[1443]: Session 29 logged out. Waiting for processes to exit. Jul 2 00:21:47.688594 systemd-logind[1443]: Removed session 29. Jul 2 00:21:52.690877 systemd[1]: Started sshd@29-10.0.0.84:22-10.0.0.1:34218.service - OpenSSH per-connection server daemon (10.0.0.1:34218). 
Jul 2 00:21:52.729297 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 34218 ssh2: RSA SHA256:jH6D/oSZ9AmQEzcguf6QpDXy0qnnoD4yyQS8v3Cwkok Jul 2 00:21:52.730943 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:21:52.735892 systemd-logind[1443]: New session 30 of user core. Jul 2 00:21:52.751856 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 2 00:21:52.866889 sshd[5559]: pam_unix(sshd:session): session closed for user core Jul 2 00:21:52.871622 systemd[1]: sshd@29-10.0.0.84:22-10.0.0.1:34218.service: Deactivated successfully. Jul 2 00:21:52.874304 systemd[1]: session-30.scope: Deactivated successfully. Jul 2 00:21:52.876180 systemd-logind[1443]: Session 30 logged out. Waiting for processes to exit. Jul 2 00:21:52.877722 systemd-logind[1443]: Removed session 30. Jul 2 00:21:53.774396 containerd[1464]: time="2024-07-02T00:21:53.774345301Z" level=info msg="StopPodSandbox for \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\"" Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.842 [WARNING][5588] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--s85f2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"76223419-840c-46e2-a1d7-e6871c06b488", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf", Pod:"coredns-5dd5756b68-s85f2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dfe09198d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.843 [INFO][5588] k8s.go 608: Cleaning up netns 
ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.843 [INFO][5588] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" iface="eth0" netns="" Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.843 [INFO][5588] k8s.go 615: Releasing IP address(es) ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.843 [INFO][5588] utils.go 188: Calico CNI releasing IP address ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.876 [INFO][5595] ipam_plugin.go 411: Releasing address using handleID ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" HandleID="k8s-pod-network.47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.877 [INFO][5595] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.877 [INFO][5595] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.883 [WARNING][5595] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" HandleID="k8s-pod-network.47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.883 [INFO][5595] ipam_plugin.go 439: Releasing address using workloadID ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" HandleID="k8s-pod-network.47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.885 [INFO][5595] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:21:53.890595 containerd[1464]: 2024-07-02 00:21:53.887 [INFO][5588] k8s.go 621: Teardown processing complete. ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:21:53.890595 containerd[1464]: time="2024-07-02T00:21:53.890374321Z" level=info msg="TearDown network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\" successfully" Jul 2 00:21:53.890595 containerd[1464]: time="2024-07-02T00:21:53.890410669Z" level=info msg="StopPodSandbox for \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\" returns successfully" Jul 2 00:21:53.891468 containerd[1464]: time="2024-07-02T00:21:53.891278025Z" level=info msg="RemovePodSandbox for \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\"" Jul 2 00:21:53.891468 containerd[1464]: time="2024-07-02T00:21:53.891320485Z" level=info msg="Forcibly stopping sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\"" Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.937 [WARNING][5617] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--s85f2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"76223419-840c-46e2-a1d7-e6871c06b488", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 20, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df2bec54870033f622d6feb38bb2ae42dedb360315cc0128064c8b83c8653abf", Pod:"coredns-5dd5756b68-s85f2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dfe09198d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.938 [INFO][5617] k8s.go 608: Cleaning up netns 
ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.938 [INFO][5617] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" iface="eth0" netns="" Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.938 [INFO][5617] k8s.go 615: Releasing IP address(es) ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.938 [INFO][5617] utils.go 188: Calico CNI releasing IP address ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.963 [INFO][5624] ipam_plugin.go 411: Releasing address using handleID ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" HandleID="k8s-pod-network.47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.963 [INFO][5624] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.963 [INFO][5624] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.970 [WARNING][5624] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" HandleID="k8s-pod-network.47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.970 [INFO][5624] ipam_plugin.go 439: Releasing address using workloadID ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" HandleID="k8s-pod-network.47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Workload="localhost-k8s-coredns--5dd5756b68--s85f2-eth0" Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.973 [INFO][5624] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:21:53.978677 containerd[1464]: 2024-07-02 00:21:53.976 [INFO][5617] k8s.go 621: Teardown processing complete. ContainerID="47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5" Jul 2 00:21:53.979836 containerd[1464]: time="2024-07-02T00:21:53.978718967Z" level=info msg="TearDown network for sandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\" successfully" Jul 2 00:21:53.989234 containerd[1464]: time="2024-07-02T00:21:53.989139818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:21:53.989423 containerd[1464]: time="2024-07-02T00:21:53.989269221Z" level=info msg="RemovePodSandbox \"47b69079274b9c40dea7b2f668c5541f37e71f22d5f393441700a356d63c1ae5\" returns successfully"