Aug 5 22:22:06.065207 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024
Aug 5 22:22:06.065245 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:22:06.065263 kernel: BIOS-provided physical RAM map:
Aug 5 22:22:06.065272 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 5 22:22:06.065281 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 5 22:22:06.065290 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 5 22:22:06.065301 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Aug 5 22:22:06.065310 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Aug 5 22:22:06.065319 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 5 22:22:06.065332 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 5 22:22:06.065341 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 5 22:22:06.065350 kernel: NX (Execute Disable) protection: active
Aug 5 22:22:06.065360 kernel: APIC: Static calls initialized
Aug 5 22:22:06.065371 kernel: SMBIOS 2.8 present.
Aug 5 22:22:06.065385 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Aug 5 22:22:06.065402 kernel: Hypervisor detected: KVM
Aug 5 22:22:06.065415 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 5 22:22:06.065455 kernel: kvm-clock: using sched offset of 2545272155 cycles
Aug 5 22:22:06.065469 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 5 22:22:06.065481 kernel: tsc: Detected 2794.748 MHz processor
Aug 5 22:22:06.065495 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 5 22:22:06.065508 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 5 22:22:06.065519 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Aug 5 22:22:06.065529 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 5 22:22:06.065543 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 5 22:22:06.065552 kernel: Using GB pages for direct mapping
Aug 5 22:22:06.065562 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:22:06.065572 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Aug 5 22:22:06.065582 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:22:06.065592 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:22:06.065602 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:22:06.065611 kernel: ACPI: FACS 0x000000009CFE0000 000040
Aug 5 22:22:06.065621 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:22:06.065636 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:22:06.065646 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:22:06.065656 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Aug 5 22:22:06.065665 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Aug 5 22:22:06.065675 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Aug 5 22:22:06.065685 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Aug 5 22:22:06.065695 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Aug 5 22:22:06.065713 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Aug 5 22:22:06.065723 kernel: No NUMA configuration found
Aug 5 22:22:06.065734 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Aug 5 22:22:06.065744 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Aug 5 22:22:06.065755 kernel: Zone ranges:
Aug 5 22:22:06.065766 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 5 22:22:06.065777 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Aug 5 22:22:06.065791 kernel: Normal empty
Aug 5 22:22:06.065802 kernel: Movable zone start for each node
Aug 5 22:22:06.065812 kernel: Early memory node ranges
Aug 5 22:22:06.065823 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 5 22:22:06.065833 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Aug 5 22:22:06.065844 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Aug 5 22:22:06.065855 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 5 22:22:06.065866 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 5 22:22:06.065876 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Aug 5 22:22:06.065890 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 5 22:22:06.065901 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 5 22:22:06.065911 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 5 22:22:06.065922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 5 22:22:06.065932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 5 22:22:06.065942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 5 22:22:06.065952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 5 22:22:06.065963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 5 22:22:06.065973 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 5 22:22:06.065986 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 5 22:22:06.065997 kernel: TSC deadline timer available
Aug 5 22:22:06.066007 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Aug 5 22:22:06.066017 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 5 22:22:06.066028 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 5 22:22:06.066038 kernel: kvm-guest: setup PV sched yield
Aug 5 22:22:06.066058 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Aug 5 22:22:06.066069 kernel: Booting paravirtualized kernel on KVM
Aug 5 22:22:06.066080 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 5 22:22:06.066091 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Aug 5 22:22:06.066105 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Aug 5 22:22:06.066115 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Aug 5 22:22:06.066126 kernel: pcpu-alloc: [0] 0 1 2 3
Aug 5 22:22:06.066136 kernel: kvm-guest: PV spinlocks enabled
Aug 5 22:22:06.066146 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 5 22:22:06.066158 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:22:06.066170 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:22:06.066180 kernel: random: crng init done
Aug 5 22:22:06.066194 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 22:22:06.066204 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:22:06.066214 kernel: Fallback order for Node 0: 0
Aug 5 22:22:06.066225 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Aug 5 22:22:06.066235 kernel: Policy zone: DMA32
Aug 5 22:22:06.066246 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:22:06.066257 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 143044K reserved, 0K cma-reserved)
Aug 5 22:22:06.066268 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 22:22:06.066279 kernel: ftrace: allocating 37659 entries in 148 pages
Aug 5 22:22:06.066294 kernel: ftrace: allocated 148 pages with 3 groups
Aug 5 22:22:06.066304 kernel: Dynamic Preempt: voluntary
Aug 5 22:22:06.066314 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:22:06.066326 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:22:06.066337 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 22:22:06.066347 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:22:06.066358 kernel: Rude variant of Tasks RCU enabled.
Aug 5 22:22:06.066368 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:22:06.066379 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:22:06.066393 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 22:22:06.066404 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Aug 5 22:22:06.066414 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:22:06.066451 kernel: Console: colour VGA+ 80x25
Aug 5 22:22:06.066462 kernel: printk: console [ttyS0] enabled
Aug 5 22:22:06.066473 kernel: ACPI: Core revision 20230628
Aug 5 22:22:06.066495 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 5 22:22:06.066516 kernel: APIC: Switch to symmetric I/O mode setup
Aug 5 22:22:06.066526 kernel: x2apic enabled
Aug 5 22:22:06.066543 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 5 22:22:06.066553 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 5 22:22:06.066566 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 5 22:22:06.066577 kernel: kvm-guest: setup PV IPIs
Aug 5 22:22:06.066588 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 5 22:22:06.066601 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 5 22:22:06.066613 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Aug 5 22:22:06.066625 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 5 22:22:06.066650 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 5 22:22:06.066661 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 5 22:22:06.066673 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 5 22:22:06.066685 kernel: Spectre V2 : Mitigation: Retpolines
Aug 5 22:22:06.066700 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Aug 5 22:22:06.066711 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Aug 5 22:22:06.066721 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Aug 5 22:22:06.066732 kernel: RETBleed: Mitigation: untrained return thunk
Aug 5 22:22:06.066744 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 5 22:22:06.066759 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 5 22:22:06.066770 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 5 22:22:06.066782 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 5 22:22:06.066794 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 5 22:22:06.066805 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 5 22:22:06.066817 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 5 22:22:06.066828 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 5 22:22:06.066838 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 5 22:22:06.066853 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Aug 5 22:22:06.066864 kernel: Freeing SMP alternatives memory: 32K
Aug 5 22:22:06.066874 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:22:06.066885 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:22:06.066896 kernel: SELinux: Initializing.
Aug 5 22:22:06.066907 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:22:06.066918 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:22:06.066930 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Aug 5 22:22:06.066941 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:22:06.066955 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:22:06.066966 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:22:06.066977 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 5 22:22:06.066988 kernel: ... version: 0
Aug 5 22:22:06.066999 kernel: ... bit width: 48
Aug 5 22:22:06.067010 kernel: ... generic registers: 6
Aug 5 22:22:06.067021 kernel: ... value mask: 0000ffffffffffff
Aug 5 22:22:06.067031 kernel: ... max period: 00007fffffffffff
Aug 5 22:22:06.067054 kernel: ... fixed-purpose events: 0
Aug 5 22:22:06.067070 kernel: ... event mask: 000000000000003f
Aug 5 22:22:06.067081 kernel: signal: max sigframe size: 1776
Aug 5 22:22:06.067092 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:22:06.067103 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:22:06.067115 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:22:06.067126 kernel: smpboot: x86: Booting SMP configuration:
Aug 5 22:22:06.067137 kernel: .... node #0, CPUs: #1 #2 #3
Aug 5 22:22:06.067148 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 22:22:06.067158 kernel: smpboot: Max logical packages: 1
Aug 5 22:22:06.067173 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Aug 5 22:22:06.067185 kernel: devtmpfs: initialized
Aug 5 22:22:06.067195 kernel: x86/mm: Memory block size: 128MB
Aug 5 22:22:06.067206 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:22:06.067217 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 22:22:06.067228 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:22:06.067239 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:22:06.067250 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:22:06.067261 kernel: audit: type=2000 audit(1722896524.960:1): state=initialized audit_enabled=0 res=1
Aug 5 22:22:06.067275 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:22:06.067286 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 5 22:22:06.067297 kernel: cpuidle: using governor menu
Aug 5 22:22:06.067309 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:22:06.067320 kernel: dca service started, version 1.12.1
Aug 5 22:22:06.067331 kernel: PCI: Using configuration type 1 for base access
Aug 5 22:22:06.067342 kernel: PCI: Using configuration type 1 for extended access
Aug 5 22:22:06.067353 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 5 22:22:06.067364 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:22:06.067379 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:22:06.067391 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:22:06.067402 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:22:06.067415 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:22:06.067457 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:22:06.067472 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:22:06.067485 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:22:06.067499 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 22:22:06.067513 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 5 22:22:06.067529 kernel: ACPI: Interpreter enabled
Aug 5 22:22:06.067540 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 5 22:22:06.067551 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 5 22:22:06.067562 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 5 22:22:06.067573 kernel: PCI: Using E820 reservations for host bridge windows
Aug 5 22:22:06.067584 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 5 22:22:06.067595 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:22:06.067866 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:22:06.067890 kernel: acpiphp: Slot [3] registered
Aug 5 22:22:06.067902 kernel: acpiphp: Slot [4] registered
Aug 5 22:22:06.067913 kernel: acpiphp: Slot [5] registered
Aug 5 22:22:06.067924 kernel: acpiphp: Slot [6] registered
Aug 5 22:22:06.067935 kernel: acpiphp: Slot [7] registered
Aug 5 22:22:06.067946 kernel: acpiphp: Slot [8] registered
Aug 5 22:22:06.067957 kernel: acpiphp: Slot [9] registered
Aug 5 22:22:06.067969 kernel: acpiphp: Slot [10] registered
Aug 5 22:22:06.067979 kernel: acpiphp: Slot [11] registered
Aug 5 22:22:06.067994 kernel: acpiphp: Slot [12] registered
Aug 5 22:22:06.068005 kernel: acpiphp: Slot [13] registered
Aug 5 22:22:06.068016 kernel: acpiphp: Slot [14] registered
Aug 5 22:22:06.068026 kernel: acpiphp: Slot [15] registered
Aug 5 22:22:06.068037 kernel: acpiphp: Slot [16] registered
Aug 5 22:22:06.068058 kernel: acpiphp: Slot [17] registered
Aug 5 22:22:06.068069 kernel: acpiphp: Slot [18] registered
Aug 5 22:22:06.068080 kernel: acpiphp: Slot [19] registered
Aug 5 22:22:06.068091 kernel: acpiphp: Slot [20] registered
Aug 5 22:22:06.068102 kernel: acpiphp: Slot [21] registered
Aug 5 22:22:06.068116 kernel: acpiphp: Slot [22] registered
Aug 5 22:22:06.068127 kernel: acpiphp: Slot [23] registered
Aug 5 22:22:06.068138 kernel: acpiphp: Slot [24] registered
Aug 5 22:22:06.068149 kernel: acpiphp: Slot [25] registered
Aug 5 22:22:06.068159 kernel: acpiphp: Slot [26] registered
Aug 5 22:22:06.068170 kernel: acpiphp: Slot [27] registered
Aug 5 22:22:06.068181 kernel: acpiphp: Slot [28] registered
Aug 5 22:22:06.068191 kernel: acpiphp: Slot [29] registered
Aug 5 22:22:06.068201 kernel: acpiphp: Slot [30] registered
Aug 5 22:22:06.068216 kernel: acpiphp: Slot [31] registered
Aug 5 22:22:06.068227 kernel: PCI host bridge to bus 0000:00
Aug 5 22:22:06.068505 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 5 22:22:06.068700 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 5 22:22:06.068851 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 5 22:22:06.069000 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Aug 5 22:22:06.069155 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 5 22:22:06.069304 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:22:06.069561 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 5 22:22:06.069743 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 5 22:22:06.069914 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Aug 5 22:22:06.070089 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Aug 5 22:22:06.070258 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Aug 5 22:22:06.070441 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Aug 5 22:22:06.070659 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Aug 5 22:22:06.070831 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Aug 5 22:22:06.071013 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Aug 5 22:22:06.071193 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 5 22:22:06.071365 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 5 22:22:06.071599 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Aug 5 22:22:06.071771 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 5 22:22:06.071937 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 5 22:22:06.072113 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 5 22:22:06.072277 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 5 22:22:06.072500 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 22:22:06.072729 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Aug 5 22:22:06.072899 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 5 22:22:06.073084 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 5 22:22:06.073261 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Aug 5 22:22:06.073473 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Aug 5 22:22:06.073652 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 5 22:22:06.073817 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 5 22:22:06.073995 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Aug 5 22:22:06.074203 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Aug 5 22:22:06.074379 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Aug 5 22:22:06.074638 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Aug 5 22:22:06.074799 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 5 22:22:06.074817 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 5 22:22:06.074829 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 5 22:22:06.074840 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 5 22:22:06.074851 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 5 22:22:06.074862 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 5 22:22:06.074879 kernel: iommu: Default domain type: Translated
Aug 5 22:22:06.074890 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 5 22:22:06.074901 kernel: PCI: Using ACPI for IRQ routing
Aug 5 22:22:06.074912 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 5 22:22:06.074923 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 5 22:22:06.074934 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Aug 5 22:22:06.075108 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 5 22:22:06.075273 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 5 22:22:06.075450 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 5 22:22:06.075472 kernel: vgaarb: loaded
Aug 5 22:22:06.075485 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 5 22:22:06.075495 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 5 22:22:06.075502 kernel: clocksource: Switched to clocksource kvm-clock
Aug 5 22:22:06.075510 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:22:06.075518 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:22:06.075525 kernel: pnp: PnP ACPI init
Aug 5 22:22:06.075674 kernel: pnp 00:02: [dma 2]
Aug 5 22:22:06.075690 kernel: pnp: PnP ACPI: found 6 devices
Aug 5 22:22:06.075698 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 5 22:22:06.075705 kernel: NET: Registered PF_INET protocol family
Aug 5 22:22:06.075712 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 22:22:06.075720 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 22:22:06.075728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:22:06.075735 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:22:06.075743 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 22:22:06.075752 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 22:22:06.075766 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:22:06.075775 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:22:06.075782 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:22:06.075789 kernel: NET: Registered PF_XDP protocol family
Aug 5 22:22:06.075904 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 5 22:22:06.076013 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 5 22:22:06.076149 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 5 22:22:06.076259 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Aug 5 22:22:06.076372 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 5 22:22:06.076547 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 5 22:22:06.076668 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 5 22:22:06.076679 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:22:06.076686 kernel: Initialise system trusted keyrings
Aug 5 22:22:06.076694 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 22:22:06.076702 kernel: Key type asymmetric registered
Aug 5 22:22:06.076709 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:22:06.076717 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 5 22:22:06.076728 kernel: io scheduler mq-deadline registered
Aug 5 22:22:06.076736 kernel: io scheduler kyber registered
Aug 5 22:22:06.076743 kernel: io scheduler bfq registered
Aug 5 22:22:06.076750 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 5 22:22:06.076759 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 5 22:22:06.076766 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Aug 5 22:22:06.076774 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 5 22:22:06.076781 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:22:06.076789 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 5 22:22:06.076799 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 5 22:22:06.076807 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 5 22:22:06.076814 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 5 22:22:06.076937 kernel: rtc_cmos 00:05: RTC can wake from S4
Aug 5 22:22:06.076948 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 5 22:22:06.077070 kernel: rtc_cmos 00:05: registered as rtc0
Aug 5 22:22:06.077182 kernel: rtc_cmos 00:05: setting system clock to 2024-08-05T22:22:05 UTC (1722896525)
Aug 5 22:22:06.077293 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 5 22:22:06.077306 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 5 22:22:06.077314 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:22:06.077322 kernel: Segment Routing with IPv6
Aug 5 22:22:06.077329 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:22:06.077337 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:22:06.077344 kernel: Key type dns_resolver registered
Aug 5 22:22:06.077351 kernel: IPI shorthand broadcast: enabled
Aug 5 22:22:06.077359 kernel: sched_clock: Marking stable (1085002432, 123245501)->(1291281373, -83033440)
Aug 5 22:22:06.077366 kernel: registered taskstats version 1
Aug 5 22:22:06.077376 kernel: Loading compiled-in X.509 certificates
Aug 5 22:22:06.077384 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532'
Aug 5 22:22:06.077392 kernel: Key type .fscrypt registered
Aug 5 22:22:06.077399 kernel: Key type fscrypt-provisioning registered
Aug 5 22:22:06.077406 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:22:06.077414 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:22:06.077433 kernel: ima: No architecture policies found
Aug 5 22:22:06.077441 kernel: clk: Disabling unused clocks
Aug 5 22:22:06.077451 kernel: Freeing unused kernel image (initmem) memory: 49372K
Aug 5 22:22:06.077458 kernel: Write protecting the kernel read-only data: 36864k
Aug 5 22:22:06.077466 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Aug 5 22:22:06.077473 kernel: Run /init as init process
Aug 5 22:22:06.077480 kernel: with arguments:
Aug 5 22:22:06.077488 kernel: /init
Aug 5 22:22:06.077495 kernel: with environment:
Aug 5 22:22:06.077502 kernel: HOME=/
Aug 5 22:22:06.077525 kernel: TERM=linux
Aug 5 22:22:06.077535 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:22:06.077548 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:22:06.077558 systemd[1]: Detected virtualization kvm.
Aug 5 22:22:06.077566 systemd[1]: Detected architecture x86-64.
Aug 5 22:22:06.077574 systemd[1]: Running in initrd.
Aug 5 22:22:06.077582 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:22:06.077590 systemd[1]: Hostname set to .
Aug 5 22:22:06.077600 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:22:06.077608 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:22:06.077616 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:22:06.077624 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:22:06.077634 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:22:06.077642 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:22:06.077650 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:22:06.077659 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:22:06.077671 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:22:06.077680 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:22:06.077688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:22:06.077696 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:22:06.077704 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:22:06.077712 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:22:06.077720 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:22:06.077731 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:22:06.077739 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:22:06.077747 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:22:06.077755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:22:06.077763 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:22:06.077771 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:22:06.077780 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:22:06.077788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:22:06.077796 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:22:06.077806 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:22:06.077815 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:22:06.077823 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 22:22:06.077832 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 22:22:06.077840 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:22:06.077850 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:22:06.077858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:22:06.077869 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 22:22:06.077877 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:22:06.077885 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 22:22:06.077917 systemd-journald[193]: Collecting audit messages is disabled.
Aug 5 22:22:06.077941 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:22:06.077949 systemd-journald[193]: Journal started
Aug 5 22:22:06.077970 systemd-journald[193]: Runtime Journal (/run/log/journal/2c45377dfd964ff78c640cc6e4e0c0cb) is 6.0M, max 48.4M, 42.3M free.
Aug 5 22:22:06.055454 systemd-modules-load[194]: Inserted module 'overlay'
Aug 5 22:22:06.101109 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:22:06.096776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:22:06.105239 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:22:06.097547 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:22:06.104709 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:22:06.111974 kernel: Bridge firewalling registered
Aug 5 22:22:06.110800 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:22:06.110812 systemd-modules-load[194]: Inserted module 'br_netfilter'
Aug 5 22:22:06.116565 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:22:06.117756 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:22:06.122995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:22:06.132601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:22:06.136115 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:22:06.148742 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:22:06.149408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:22:06.165349 dracut-cmdline[224]: dracut-dracut-053
Aug 5 22:22:06.166053 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:22:06.169372 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:22:06.179302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:22:06.222873 systemd-resolved[239]: Positive Trust Anchors:
Aug 5 22:22:06.222898 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:22:06.222946 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:22:06.226771 systemd-resolved[239]: Defaulting to hostname 'linux'.
Aug 5 22:22:06.228326 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:22:06.235383 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:22:06.315497 kernel: SCSI subsystem initialized
Aug 5 22:22:06.332289 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:22:06.362100 kernel: iscsi: registered transport (tcp)
Aug 5 22:22:06.396667 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:22:06.396754 kernel: QLogic iSCSI HBA Driver
Aug 5 22:22:06.473563 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:22:06.490889 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:22:06.531043 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:22:06.531144 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:22:06.531163 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:22:06.583487 kernel: raid6: avx2x4 gen() 24666 MB/s
Aug 5 22:22:06.600524 kernel: raid6: avx2x2 gen() 24921 MB/s
Aug 5 22:22:06.618057 kernel: raid6: avx2x1 gen() 17704 MB/s
Aug 5 22:22:06.618150 kernel: raid6: using algorithm avx2x2 gen() 24921 MB/s
Aug 5 22:22:06.636001 kernel: raid6: .... xor() 12923 MB/s, rmw enabled
Aug 5 22:22:06.636103 kernel: raid6: using avx2x2 recovery algorithm
Aug 5 22:22:06.672486 kernel: xor: automatically using best checksumming function avx
Aug 5 22:22:07.008550 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:22:07.036994 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:22:07.053811 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:22:07.075202 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Aug 5 22:22:07.082221 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:22:07.092687 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:22:07.110749 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Aug 5 22:22:07.156142 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:22:07.165748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:22:07.250087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:22:07.259513 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:22:07.275419 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:22:07.278482 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:22:07.278952 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:22:07.279315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:22:07.289871 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:22:07.303154 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:22:07.318517 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Aug 5 22:22:07.341409 kernel: cryptd: max_cpu_qlen set to 1000
Aug 5 22:22:07.341449 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 22:22:07.341647 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 5 22:22:07.341664 kernel: AES CTR mode by8 optimization enabled
Aug 5 22:22:07.341678 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:22:07.341696 kernel: GPT:9289727 != 19775487
Aug 5 22:22:07.341710 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:22:07.341724 kernel: GPT:9289727 != 19775487
Aug 5 22:22:07.341737 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:22:07.341751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:22:07.337219 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:22:07.337410 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:22:07.349901 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:22:07.358172 kernel: libata version 3.00 loaded.
Aug 5 22:22:07.358228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:22:07.358489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:22:07.366690 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 5 22:22:07.395689 kernel: scsi host0: ata_piix
Aug 5 22:22:07.395938 kernel: scsi host1: ata_piix
Aug 5 22:22:07.398625 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Aug 5 22:22:07.398645 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Aug 5 22:22:07.361512 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:22:07.371865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:22:07.450156 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 22:22:07.499521 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Aug 5 22:22:07.499555 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (463)
Aug 5 22:22:07.501795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:22:07.516518 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 22:22:07.527116 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:22:07.537663 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 22:22:07.538864 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 22:22:07.552566 kernel: ata2: found unknown device (class 0)
Aug 5 22:22:07.552700 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Aug 5 22:22:07.555928 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:22:07.562042 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 5 22:22:07.566401 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:22:07.580942 disk-uuid[542]: Primary Header is updated.
Aug 5 22:22:07.580942 disk-uuid[542]: Secondary Entries is updated.
Aug 5 22:22:07.580942 disk-uuid[542]: Secondary Header is updated.
Aug 5 22:22:07.588060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:22:07.628358 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:22:07.682882 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Aug 5 22:22:07.714472 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 5 22:22:07.714494 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Aug 5 22:22:08.638265 disk-uuid[544]: The operation has completed successfully.
Aug 5 22:22:08.640110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:22:08.671782 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 22:22:08.671953 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 22:22:08.701845 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 22:22:08.707949 sh[581]: Success
Aug 5 22:22:08.730516 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 5 22:22:08.808720 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 22:22:08.813824 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 22:22:08.816975 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 22:22:08.841066 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f
Aug 5 22:22:08.841130 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:22:08.841147 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 22:22:08.843090 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 22:22:08.844008 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 22:22:08.865062 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 22:22:08.869013 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 22:22:08.883848 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 22:22:08.887670 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 22:22:08.902107 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:22:08.902177 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:22:08.902193 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:22:08.913491 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:22:08.933244 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 22:22:08.935766 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:22:08.957776 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 22:22:08.972258 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 22:22:09.175584 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:22:09.262342 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:22:09.305485 systemd-networkd[764]: lo: Link UP
Aug 5 22:22:09.305501 systemd-networkd[764]: lo: Gained carrier
Aug 5 22:22:09.308107 systemd-networkd[764]: Enumeration completed
Aug 5 22:22:09.308926 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:22:09.308931 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:22:09.312145 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:22:09.315883 systemd[1]: Reached target network.target - Network.
Aug 5 22:22:09.324188 systemd-networkd[764]: eth0: Link UP
Aug 5 22:22:09.324194 systemd-networkd[764]: eth0: Gained carrier
Aug 5 22:22:09.324210 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:22:09.328342 ignition[683]: Ignition 2.19.0
Aug 5 22:22:09.328352 ignition[683]: Stage: fetch-offline
Aug 5 22:22:09.328492 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:22:09.328513 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:22:09.328654 ignition[683]: parsed url from cmdline: ""
Aug 5 22:22:09.328659 ignition[683]: no config URL provided
Aug 5 22:22:09.328667 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:22:09.328679 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:22:09.328722 ignition[683]: op(1): [started] loading QEMU firmware config module
Aug 5 22:22:09.328729 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 5 22:22:09.369653 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:22:09.374071 ignition[683]: op(1): [finished] loading QEMU firmware config module
Aug 5 22:22:09.374120 ignition[683]: QEMU firmware config was not found. Ignoring...
Aug 5 22:22:09.426979 ignition[683]: parsing config with SHA512: cc2de50274a52083b76b26ab2e1df262dd32bd8828b199da156ecc2f5864311b34080bc09710b1182184ee4f6f43094dbd4fee6fc42c5fd83b5d595911ed24ff
Aug 5 22:22:09.439347 unknown[683]: fetched base config from "system"
Aug 5 22:22:09.439370 unknown[683]: fetched user config from "qemu"
Aug 5 22:22:09.443979 ignition[683]: fetch-offline: fetch-offline passed
Aug 5 22:22:09.445463 ignition[683]: Ignition finished successfully
Aug 5 22:22:09.450467 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:22:09.451711 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 5 22:22:09.457860 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 22:22:09.487800 ignition[777]: Ignition 2.19.0
Aug 5 22:22:09.487823 ignition[777]: Stage: kargs
Aug 5 22:22:09.488240 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:22:09.488256 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:22:09.489657 ignition[777]: kargs: kargs passed
Aug 5 22:22:09.489723 ignition[777]: Ignition finished successfully
Aug 5 22:22:09.495442 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 22:22:09.508487 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 22:22:09.974939 ignition[785]: Ignition 2.19.0
Aug 5 22:22:09.974962 ignition[785]: Stage: disks
Aug 5 22:22:09.975301 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:22:09.975321 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:22:09.980077 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 22:22:09.977007 ignition[785]: disks: disks passed
Aug 5 22:22:09.977071 ignition[785]: Ignition finished successfully
Aug 5 22:22:09.990737 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 22:22:09.991109 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:22:09.994209 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:22:10.017553 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:22:10.025173 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:22:10.045548 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 22:22:10.078691 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 22:22:10.137889 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 22:22:10.163719 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 22:22:10.749480 kernel: EXT4-fs (vda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none.
Aug 5 22:22:10.755038 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 22:22:10.758809 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:22:10.774823 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:22:10.785563 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 22:22:10.794382 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804)
Aug 5 22:22:10.794413 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:22:10.794472 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:22:10.794486 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:22:10.795219 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 22:22:10.795916 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 22:22:10.816116 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:22:10.795954 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:22:10.826826 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:22:10.834140 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 22:22:10.856016 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 22:22:10.941962 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 22:22:10.955025 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Aug 5 22:22:10.965971 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 22:22:10.977792 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 22:22:11.212923 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 22:22:11.236673 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 22:22:11.240820 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 22:22:11.273339 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 22:22:11.278068 kernel: BTRFS info (device vda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:22:11.329474 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 22:22:11.330132 systemd-networkd[764]: eth0: Gained IPv6LL
Aug 5 22:22:11.354959 ignition[919]: INFO : Ignition 2.19.0
Aug 5 22:22:11.354959 ignition[919]: INFO : Stage: mount
Aug 5 22:22:11.358147 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:22:11.358147 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:22:11.358147 ignition[919]: INFO : mount: mount passed
Aug 5 22:22:11.358147 ignition[919]: INFO : Ignition finished successfully
Aug 5 22:22:11.362093 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 22:22:11.376704 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 22:22:11.777007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:22:11.793650 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Aug 5 22:22:11.798482 kernel: BTRFS info (device vda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b
Aug 5 22:22:11.798538 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:22:11.798553 kernel: BTRFS info (device vda6): using free space tree
Aug 5 22:22:11.811684 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 5 22:22:11.820151 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:22:11.866126 ignition[947]: INFO : Ignition 2.19.0
Aug 5 22:22:11.866126 ignition[947]: INFO : Stage: files
Aug 5 22:22:11.870107 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:22:11.870107 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:22:11.870107 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 22:22:11.870107 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 22:22:11.870107 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 22:22:11.878830 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 22:22:11.878830 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 22:22:11.884130 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 22:22:11.884130 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:22:11.884130 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 5 22:22:11.878880 unknown[947]: wrote ssh authorized keys file for user: core
Aug 5 22:22:11.924518 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 22:22:12.030850 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:22:12.030850 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 22:22:12.037472 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 22:22:12.037472 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:22:12.046364 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:22:12.076134 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Aug 5 22:22:12.600829 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 5 22:22:13.083478 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Aug 5 22:22:13.083478 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 5 22:22:13.097022 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:22:13.097022 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:22:13.097022 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 5 22:22:13.097022 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 5 22:22:13.097022 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:22:13.097022 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 5 22:22:13.097022 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 5 22:22:13.097022 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:22:13.197888 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:22:13.211540 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 5 22:22:13.214339 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 5 22:22:13.214339 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 22:22:13.220749 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 22:22:13.220749 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:22:13.220749 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:22:13.220749 ignition[947]: INFO : files: files passed
Aug 5 22:22:13.220749 ignition[947]: INFO : Ignition finished successfully
Aug 5 22:22:13.228619 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 22:22:13.268730 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 22:22:13.274113 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 22:22:13.292203 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 22:22:13.292362 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 22:22:13.310138 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 5 22:22:13.319746 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:22:13.319746 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:22:13.329409 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:22:13.331202 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:22:13.343039 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 22:22:13.363750 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 22:22:13.439332 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 22:22:13.439508 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 22:22:13.446059 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 22:22:13.452042 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 22:22:13.454508 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 22:22:13.476617 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 22:22:13.503825 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:22:13.522709 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 22:22:13.541193 systemd[1]: Stopped target network.target - Network.
Aug 5 22:22:13.541912 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:22:13.542327 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:22:13.547300 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 22:22:13.552236 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 22:22:13.552467 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:22:13.554526 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 22:22:13.554972 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 22:22:13.555453 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 22:22:13.569734 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:22:13.570284 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 22:22:13.571693 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 22:22:13.581833 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:22:13.588252 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 22:22:13.588774 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 22:22:13.591481 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 22:22:13.592001 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 22:22:13.592180 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:22:13.603749 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:22:13.606431 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:22:13.609890 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 22:22:13.611728 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:22:13.614969 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 22:22:13.616302 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:22:13.619204 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 22:22:13.620616 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:22:13.624085 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 22:22:13.626351 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 22:22:13.630716 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:22:13.635807 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 22:22:13.638404 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 22:22:13.640749 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 22:22:13.640916 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:22:13.642317 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 22:22:13.642458 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:22:13.648833 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 22:22:13.651203 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:22:13.655833 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 22:22:13.656003 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 22:22:13.666744 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 22:22:13.667113 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:22:13.667284 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:22:13.677384 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 22:22:13.678665 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 22:22:13.698279 systemd-networkd[764]: eth0: DHCPv6 lease lost
Aug 5 22:22:13.701649 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 22:22:13.704745 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 22:22:13.704999 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:22:13.709275 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 22:22:13.709461 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:22:13.715883 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 22:22:13.723376 ignition[1003]: INFO : Ignition 2.19.0
Aug 5 22:22:13.723376 ignition[1003]: INFO : Stage: umount
Aug 5 22:22:13.723376 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:22:13.723376 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 5 22:22:13.723376 ignition[1003]: INFO : umount: umount passed
Aug 5 22:22:13.723376 ignition[1003]: INFO : Ignition finished successfully
Aug 5 22:22:13.716040 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 22:22:13.720507 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 22:22:13.720663 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 22:22:13.723221 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 22:22:13.726908 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 22:22:13.727087 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:22:13.728602 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 22:22:13.728837 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 22:22:13.745839 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 22:22:13.745907 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:22:13.746610 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 22:22:13.746688 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 22:22:13.755360 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 22:22:13.755477 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 22:22:13.757734 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 22:22:13.757801 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 22:22:13.760483 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 22:22:13.760546 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 22:22:13.775608 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 22:22:13.775871 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 22:22:13.775948 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:22:13.776336 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 22:22:13.776388 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:22:13.778374 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 22:22:13.778480 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:22:13.784067 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 22:22:13.784144 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:22:13.784543 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:22:13.805261 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:22:13.805447 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:22:13.828222 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:22:13.828563 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:22:13.834439 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:22:13.834547 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:22:13.838321 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:22:13.838383 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:22:13.841108 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:22:13.841198 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:22:13.850434 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:22:13.850555 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:22:13.852840 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:22:13.852912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:22:13.869621 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:22:13.872756 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:22:13.872918 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:22:13.875011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:22:13.875824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:22:13.894202 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:22:13.894442 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:22:14.008565 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:22:14.008743 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:22:14.012521 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:22:14.014947 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:22:14.015050 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:22:14.032358 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:22:14.043921 systemd[1]: Switching root.
Aug 5 22:22:14.086265 systemd-journald[193]: Journal stopped
Aug 5 22:22:15.788024 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:22:15.788095 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:22:15.788110 kernel: SELinux: policy capability open_perms=1
Aug 5 22:22:15.788128 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:22:15.788140 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:22:15.788151 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:22:15.788163 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:22:15.788183 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:22:15.788198 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:22:15.788210 kernel: audit: type=1403 audit(1722896534.557:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:22:15.788222 systemd[1]: Successfully loaded SELinux policy in 62.145ms.
Aug 5 22:22:15.788243 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.477ms.
Aug 5 22:22:15.788256 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:22:15.788268 systemd[1]: Detected virtualization kvm.
Aug 5 22:22:15.788281 systemd[1]: Detected architecture x86-64.
Aug 5 22:22:15.788292 systemd[1]: Detected first boot.
Aug 5 22:22:15.788307 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:22:15.788319 zram_generator::config[1048]: No configuration found.
Aug 5 22:22:15.788332 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:22:15.788345 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 22:22:15.788361 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 22:22:15.788373 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:22:15.788386 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:22:15.788403 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:22:15.788448 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:22:15.788469 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:22:15.788486 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:22:15.788503 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:22:15.788525 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:22:15.788541 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:22:15.788558 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:22:15.788576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:22:15.788593 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:22:15.788615 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:22:15.788633 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:22:15.788650 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:22:15.788669 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 5 22:22:15.788688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:22:15.788705 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 22:22:15.788723 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 22:22:15.788741 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:22:15.788783 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:22:15.788803 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:22:15.788820 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:22:15.788850 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:22:15.788879 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:22:15.788898 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:22:15.788915 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:22:15.788931 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:22:15.788947 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:22:15.788970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:22:15.788994 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:22:15.789024 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:22:15.789043 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:22:15.789060 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:22:15.789077 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:22:15.789095 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:22:15.789112 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:22:15.789128 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:22:15.789150 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:22:15.789167 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:22:15.789183 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:22:15.789200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:22:15.789217 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:22:15.790563 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:22:15.790609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:22:15.790627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:22:15.790655 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:22:15.790672 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:22:15.790689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:22:15.790707 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:22:15.790724 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 22:22:15.790740 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 22:22:15.790768 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 22:22:15.790788 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 22:22:15.790811 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:22:15.790828 kernel: loop: module loaded
Aug 5 22:22:15.790846 kernel: fuse: init (API version 7.39)
Aug 5 22:22:15.790862 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:22:15.790881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:22:15.790898 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:22:15.790916 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:22:15.790933 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 22:22:15.790950 systemd[1]: Stopped verity-setup.service.
Aug 5 22:22:15.790976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:22:15.790997 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:22:15.791014 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:22:15.791031 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:22:15.791048 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:22:15.791067 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:22:15.791135 systemd-journald[1114]: Collecting audit messages is disabled.
Aug 5 22:22:15.791172 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:22:15.791191 kernel: ACPI: bus type drm_connector registered
Aug 5 22:22:15.791209 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:22:15.791226 systemd-journald[1114]: Journal started
Aug 5 22:22:15.791262 systemd-journald[1114]: Runtime Journal (/run/log/journal/2c45377dfd964ff78c640cc6e4e0c0cb) is 6.0M, max 48.4M, 42.3M free.
Aug 5 22:22:15.427096 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:22:15.461589 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 5 22:22:15.462459 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 22:22:15.794767 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:22:15.798604 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:22:15.798899 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:22:15.801060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:22:15.801302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:22:15.803188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:22:15.803443 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:22:15.805227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:22:15.805480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:22:15.811830 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:22:15.812098 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:22:15.813978 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:22:15.814251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:22:15.816700 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:22:15.818548 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:22:15.820656 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:22:15.822835 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:22:15.841818 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:22:15.853675 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:22:15.857165 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:22:15.861718 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:22:15.861778 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:22:15.864305 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:22:15.867504 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:22:15.870836 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:22:15.872362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:22:15.879319 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:22:15.885670 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:22:15.888434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:22:15.892030 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:22:15.894824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:22:15.897107 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:22:15.900820 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:22:15.908597 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:22:15.913681 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:22:15.915976 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:22:15.918690 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:22:15.920845 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:22:15.924860 systemd-journald[1114]: Time spent on flushing to /var/log/journal/2c45377dfd964ff78c640cc6e4e0c0cb is 19.921ms for 949 entries.
Aug 5 22:22:15.924860 systemd-journald[1114]: System Journal (/var/log/journal/2c45377dfd964ff78c640cc6e4e0c0cb) is 8.0M, max 195.6M, 187.6M free.
Aug 5 22:22:15.975082 systemd-journald[1114]: Received client request to flush runtime journal.
Aug 5 22:22:15.975134 kernel: loop0: detected capacity change from 0 to 139760
Aug 5 22:22:15.975156 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:22:15.936974 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:22:15.956898 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 5 22:22:15.959679 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:22:15.963566 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:22:15.992813 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:22:15.995665 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:22:16.001148 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:22:16.010087 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:22:16.025482 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:22:16.027637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:22:16.054277 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:22:16.057170 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:22:16.063571 kernel: loop1: detected capacity change from 0 to 211296
Aug 5 22:22:16.068115 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Aug 5 22:22:16.068136 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Aug 5 22:22:16.077681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:22:16.108452 kernel: loop2: detected capacity change from 0 to 80568
Aug 5 22:22:16.159865 kernel: loop3: detected capacity change from 0 to 139760
Aug 5 22:22:16.218545 kernel: loop4: detected capacity change from 0 to 211296
Aug 5 22:22:16.243460 kernel: loop5: detected capacity change from 0 to 80568
Aug 5 22:22:16.257302 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 5 22:22:16.258158 (sd-merge)[1185]: Merged extensions into '/usr'.
Aug 5 22:22:16.263925 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:22:16.263949 systemd[1]: Reloading...
Aug 5 22:22:16.341461 zram_generator::config[1215]: No configuration found.
Aug 5 22:22:16.485623 ldconfig[1156]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:22:16.487794 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:22:16.552758 systemd[1]: Reloading finished in 288 ms.
Aug 5 22:22:16.592183 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:22:16.594327 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:22:16.608839 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:22:16.611499 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:22:16.621311 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:22:16.621330 systemd[1]: Reloading...
Aug 5 22:22:16.641300 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:22:16.642225 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:22:16.643645 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:22:16.644164 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Aug 5 22:22:16.644259 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Aug 5 22:22:16.648295 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:22:16.648461 systemd-tmpfiles[1247]: Skipping /boot
Aug 5 22:22:16.663130 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:22:16.663148 systemd-tmpfiles[1247]: Skipping /boot
Aug 5 22:22:16.698478 zram_generator::config[1278]: No configuration found.
Aug 5 22:22:16.811174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:22:16.870651 systemd[1]: Reloading finished in 248 ms.
Aug 5 22:22:16.891155 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:22:16.893230 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:22:16.911840 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:22:16.914916 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:22:16.917708 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:22:16.924036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:22:16.929704 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:22:16.936610 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:22:16.950290 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:22:16.953912 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:22:16.954138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:22:16.955894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:22:16.959834 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:22:16.965124 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:22:16.966553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:22:16.966706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:22:16.973094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:22:16.973398 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:22:16.978862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:22:16.979826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:22:16.984094 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:22:16.984658 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:22:16.985826 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Aug 5 22:22:16.995531 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 22:22:17.004736 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 22:22:17.018391 augenrules[1341]: No rules
Aug 5 22:22:17.021707 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:22:17.026655 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:22:17.028474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:22:17.039923 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:22:17.043989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:22:17.054867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:22:17.059969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:22:17.062163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:22:17.068573 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 22:22:17.074519 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:22:17.075998 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:22:17.084149 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 22:22:17.087394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:22:17.087827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:22:17.090318 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:22:17.090644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:22:17.092682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:22:17.093033 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:22:17.095284 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:22:17.095572 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:22:17.103729 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 22:22:17.107330 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:22:17.145780 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1371)
Aug 5 22:22:17.154899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:22:17.156726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:22:17.156841 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:22:17.160646 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 5 22:22:17.163486 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1364)
Aug 5 22:22:17.166507 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 22:22:17.167079 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 22:22:17.181186 systemd-resolved[1315]: Positive Trust Anchors:
Aug 5 22:22:17.181214 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:22:17.181258 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:22:17.194321 systemd-resolved[1315]: Defaulting to hostname 'linux'.
Aug 5 22:22:17.196721 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:22:17.215894 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 5 22:22:17.217575 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:22:17.231641 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:22:17.240943 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:22:17.262545 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 22:22:17.280657 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 5 22:22:17.286089 kernel: ACPI: button: Power Button [PWRF]
Aug 5 22:22:17.288472 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Aug 5 22:22:17.301355 systemd-networkd[1380]: lo: Link UP
Aug 5 22:22:17.301375 systemd-networkd[1380]: lo: Gained carrier
Aug 5 22:22:17.303887 systemd-networkd[1380]: Enumeration completed
Aug 5 22:22:17.304461 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:22:17.304466 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:22:17.305539 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:22:17.306842 systemd-networkd[1380]: eth0: Link UP
Aug 5 22:22:17.306862 systemd-networkd[1380]: eth0: Gained carrier
Aug 5 22:22:17.306883 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:22:17.308436 systemd[1]: Reached target network.target - Network.
Aug 5 22:22:17.318462 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 5 22:22:17.318743 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 22:22:17.320546 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:22:17.327544 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 5 22:22:17.329589 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 22:22:17.840121 systemd-resolved[1315]: Clock change detected. Flushing caches.
Aug 5 22:22:17.840339 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 5 22:22:17.840438 systemd-timesyncd[1383]: Initial clock synchronization to Mon 2024-08-05 22:22:17.840048 UTC.
Aug 5 22:22:17.873871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:22:17.886387 kernel: mousedev: PS/2 mouse device common for all mice
Aug 5 22:22:18.014192 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:22:18.119494 kernel: kvm_amd: TSC scaling supported
Aug 5 22:22:18.119660 kernel: kvm_amd: Nested Virtualization enabled
Aug 5 22:22:18.119680 kernel: kvm_amd: Nested Paging enabled
Aug 5 22:22:18.120495 kernel: kvm_amd: LBR virtualization supported
Aug 5 22:22:18.120531 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Aug 5 22:22:18.121563 kernel: kvm_amd: Virtual GIF supported
Aug 5 22:22:18.188395 kernel: EDAC MC: Ver: 3.0.0
Aug 5 22:22:18.222479 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 22:22:18.240857 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 22:22:18.258597 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:22:18.305400 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 22:22:18.307488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:22:18.308897 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:22:18.310434 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 22:22:18.311887 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 22:22:18.313432 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 22:22:18.314745 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 22:22:18.317130 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 22:22:18.319028 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 22:22:18.319073 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:22:18.320280 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:22:18.322922 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 22:22:18.327025 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 22:22:18.340782 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 22:22:18.344883 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 22:22:18.346790 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 22:22:18.348325 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:22:18.353089 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:22:18.354514 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:22:18.354550 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:22:18.356270 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 22:22:18.359291 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 22:22:18.359827 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:22:18.364589 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 22:22:18.372127 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 22:22:18.374934 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 22:22:18.378453 jq[1417]: false
Aug 5 22:22:18.376845 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 22:22:18.379564 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 22:22:18.383576 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 22:22:18.388297 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 22:22:18.397008 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 22:22:18.398858 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 22:22:18.399569 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 22:22:18.401960 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 22:22:18.407242 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 22:22:18.410204 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 22:22:18.413584 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 22:22:18.413891 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 22:22:18.417222 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 22:22:18.417527 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 22:22:18.420285 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 22:22:18.420646 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 22:22:18.421754 jq[1432]: true
Aug 5 22:22:18.427390 extend-filesystems[1418]: Found loop3
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found loop4
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found loop5
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found sr0
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found vda
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found vda1
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found vda2
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found vda3
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found usr
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found vda4
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found vda6
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found vda7
Aug 5 22:22:18.433219 extend-filesystems[1418]: Found vda9
Aug 5 22:22:18.433219 extend-filesystems[1418]: Checking size of /dev/vda9
Aug 5 22:22:18.450476 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 22:22:18.474643 extend-filesystems[1418]: Resized partition /dev/vda9
Aug 5 22:22:18.446915 dbus-daemon[1416]: [system] SELinux support is enabled
Aug 5 22:22:18.477393 jq[1436]: true
Aug 5 22:22:18.456998 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 22:22:18.457053 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 22:22:18.477909 tar[1435]: linux-amd64/helm
Aug 5 22:22:18.478390 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1362)
Aug 5 22:22:18.459793 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 22:22:18.478688 update_engine[1430]: I0805 22:22:18.470997 1430 main.cc:92] Flatcar Update Engine starting
Aug 5 22:22:18.459930 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 22:22:18.459955 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 22:22:18.481652 extend-filesystems[1452]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 22:22:18.486412 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 22:22:18.487798 update_engine[1430]: I0805 22:22:18.486991 1430 update_check_scheduler.cc:74] Next update check in 9m28s
Aug 5 22:22:18.496712 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 5 22:22:18.503607 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 22:22:18.549744 systemd-logind[1426]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 5 22:22:18.549778 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 5 22:22:18.553822 systemd-logind[1426]: New seat seat0.
Aug 5 22:22:18.557322 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 22:22:18.566419 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 5 22:22:18.566140 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 22:22:18.599444 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 5 22:22:18.599444 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 22:22:18.599444 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 5 22:22:18.610456 extend-filesystems[1418]: Resized filesystem in /dev/vda9
Aug 5 22:22:18.611620 bash[1470]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:22:18.602794 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 22:22:18.603058 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 22:22:18.611603 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 22:22:18.615606 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 5 22:22:18.746909 containerd[1439]: time="2024-08-05T22:22:18.746731678Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Aug 5 22:22:18.761193 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 22:22:18.773867 containerd[1439]: time="2024-08-05T22:22:18.773559329Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 22:22:18.773867 containerd[1439]: time="2024-08-05T22:22:18.773628449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:22:18.775593 containerd[1439]: time="2024-08-05T22:22:18.775564690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.775682370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.775952647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.775968186Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.776060239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.776119099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.776131483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.776211603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.776463856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.776481780Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.776491428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:22:18.776843 containerd[1439]: time="2024-08-05T22:22:18.776602366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:22:18.777083 containerd[1439]: time="2024-08-05T22:22:18.776626150Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 22:22:18.777083 containerd[1439]: time="2024-08-05T22:22:18.776684380Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 22:22:18.777083 containerd[1439]: time="2024-08-05T22:22:18.776695260Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 22:22:18.784701 containerd[1439]: time="2024-08-05T22:22:18.784670516Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 22:22:18.784787 containerd[1439]: time="2024-08-05T22:22:18.784774031Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 22:22:18.784836 containerd[1439]: time="2024-08-05T22:22:18.784824826Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 22:22:18.784921 containerd[1439]: time="2024-08-05T22:22:18.784907561Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 22:22:18.784973 containerd[1439]: time="2024-08-05T22:22:18.784962143Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 22:22:18.785018 containerd[1439]: time="2024-08-05T22:22:18.785006827Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 22:22:18.785065 containerd[1439]: time="2024-08-05T22:22:18.785053455Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 22:22:18.785256 containerd[1439]: time="2024-08-05T22:22:18.785237520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 22:22:18.785318 containerd[1439]: time="2024-08-05T22:22:18.785305587Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 22:22:18.785386 containerd[1439]: time="2024-08-05T22:22:18.785372743Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 22:22:18.785454 containerd[1439]: time="2024-08-05T22:22:18.785438797Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 22:22:18.785502 containerd[1439]: time="2024-08-05T22:22:18.785490915Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 22:22:18.785561 containerd[1439]: time="2024-08-05T22:22:18.785549385Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 22:22:18.785622 containerd[1439]: time="2024-08-05T22:22:18.785599008Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 22:22:18.785672 containerd[1439]: time="2024-08-05T22:22:18.785660192Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 22:22:18.785723 containerd[1439]: time="2024-08-05T22:22:18.785708353Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 22:22:18.785814 containerd[1439]: time="2024-08-05T22:22:18.785796358Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 22:22:18.785879 containerd[1439]: time="2024-08-05T22:22:18.785863093Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 22:22:18.785934 containerd[1439]: time="2024-08-05T22:22:18.785920370Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 22:22:18.786122 containerd[1439]: time="2024-08-05T22:22:18.786102261Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 22:22:18.786637 containerd[1439]: time="2024-08-05T22:22:18.786562975Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 22:22:18.786691 containerd[1439]: time="2024-08-05T22:22:18.786664245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.786720 containerd[1439]: time="2024-08-05T22:22:18.786687609Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 22:22:18.786740 containerd[1439]: time="2024-08-05T22:22:18.786724108Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 22:22:18.786835 containerd[1439]: time="2024-08-05T22:22:18.786815279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.786886 containerd[1439]: time="2024-08-05T22:22:18.786839183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.786886 containerd[1439]: time="2024-08-05T22:22:18.786861135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.786886 containerd[1439]: time="2024-08-05T22:22:18.786877024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.786968 containerd[1439]: time="2024-08-05T22:22:18.786895279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.786968 containerd[1439]: time="2024-08-05T22:22:18.786912771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.786968 containerd[1439]: time="2024-08-05T22:22:18.786928130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.786968 containerd[1439]: time="2024-08-05T22:22:18.786943108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.786968 containerd[1439]: time="2024-08-05T22:22:18.786959589Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 22:22:18.787267 containerd[1439]: time="2024-08-05T22:22:18.787230958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.787267 containerd[1439]: time="2024-08-05T22:22:18.787262016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.787332 containerd[1439]: time="2024-08-05T22:22:18.787279820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.787332 containerd[1439]: time="2024-08-05T22:22:18.787297242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.787332 containerd[1439]: time="2024-08-05T22:22:18.787314445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.787432 containerd[1439]: time="2024-08-05T22:22:18.787333370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.787432 containerd[1439]: time="2024-08-05T22:22:18.787370450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.787432 containerd[1439]: time="2024-08-05T22:22:18.787401438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 22:22:18.788046 containerd[1439]: time="2024-08-05T22:22:18.787800346Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 22:22:18.788046 containerd[1439]: time="2024-08-05T22:22:18.787886427Z" level=info msg="Connect containerd service"
Aug 5 22:22:18.788046 containerd[1439]: time="2024-08-05T22:22:18.787923327Z" level=info msg="using legacy CRI server"
Aug 5 22:22:18.788046 containerd[1439]: time="2024-08-05T22:22:18.787932494Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 22:22:18.788046 containerd[1439]: time="2024-08-05T22:22:18.788024096Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 22:22:18.789185 containerd[1439]: time="2024-08-05T22:22:18.789145088Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 22:22:18.789236 containerd[1439]: time="2024-08-05T22:22:18.789194320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 22:22:18.789236 containerd[1439]: time="2024-08-05T22:22:18.789217484Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 22:22:18.789236 containerd[1439]: time="2024-08-05T22:22:18.789232321Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 22:22:18.789316 containerd[1439]: time="2024-08-05T22:22:18.789248221Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 22:22:18.789629 containerd[1439]: time="2024-08-05T22:22:18.789471650Z" level=info msg="Start subscribing containerd event"
Aug 5 22:22:18.789629 containerd[1439]: time="2024-08-05T22:22:18.789536402Z" level=info msg="Start recovering state"
Aug 5 22:22:18.789629 containerd[1439]: time="2024-08-05T22:22:18.789562150Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 22:22:18.789701 containerd[1439]: time="2024-08-05T22:22:18.789650967Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 22:22:18.790280 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 22:22:18.792549 containerd[1439]: time="2024-08-05T22:22:18.791065008Z" level=info msg="Start event monitor"
Aug 5 22:22:18.792549 containerd[1439]: time="2024-08-05T22:22:18.791105174Z" level=info msg="Start snapshots syncer"
Aug 5 22:22:18.792549 containerd[1439]: time="2024-08-05T22:22:18.791129149Z" level=info msg="Start cni network conf syncer for default"
Aug 5 22:22:18.792549 containerd[1439]: time="2024-08-05T22:22:18.791142634Z" level=info msg="Start streaming server"
Aug 5 22:22:18.792549 containerd[1439]: time="2024-08-05T22:22:18.791244806Z" level=info msg="containerd successfully booted in 0.047412s"
Aug 5 22:22:18.792083 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 22:22:18.807774 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 22:22:18.815901 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 22:22:18.816192 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 22:22:18.820337 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 22:22:18.855529 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 22:22:18.863704 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 22:22:18.866928 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 5 22:22:18.868365 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 22:22:18.982319 tar[1435]: linux-amd64/LICENSE
Aug 5 22:22:18.982319 tar[1435]: linux-amd64/README.md
Aug 5 22:22:18.997965 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 5 22:22:19.385667 systemd-networkd[1380]: eth0: Gained IPv6LL
Aug 5 22:22:19.390211 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 22:22:19.392469 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 22:22:19.404776 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 5 22:22:19.408916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:22:19.412919 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 22:22:19.439112 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 5 22:22:19.439631 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 5 22:22:19.441879 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:22:19.444804 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 22:22:20.102413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:22:20.104232 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 22:22:20.106681 systemd[1]: Startup finished in 1.237s (kernel) + 8.732s (initrd) + 5.100s (userspace) = 15.070s.
Aug 5 22:22:20.126069 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:22:20.663323 kubelet[1528]: E0805 22:22:20.663204 1528 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:22:20.668250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:22:20.668522 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:22:20.668920 systemd[1]: kubelet.service: Consumed 1.077s CPU time.
Aug 5 22:22:22.263385 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 22:22:22.264695 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:35718.service - OpenSSH per-connection server daemon (10.0.0.1:35718).
Aug 5 22:22:22.311224 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 35718 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:22:22.313421 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:22.323381 systemd-logind[1426]: New session 1 of user core.
Aug 5 22:22:22.324796 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 22:22:22.344806 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 22:22:22.358053 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 22:22:22.371593 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 22:22:22.374865 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:22.480824 systemd[1546]: Queued start job for default target default.target.
Aug 5 22:22:22.490740 systemd[1546]: Created slice app.slice - User Application Slice.
Aug 5 22:22:22.490768 systemd[1546]: Reached target paths.target - Paths.
Aug 5 22:22:22.490783 systemd[1546]: Reached target timers.target - Timers.
Aug 5 22:22:22.492393 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 22:22:22.504962 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 22:22:22.505125 systemd[1546]: Reached target sockets.target - Sockets.
Aug 5 22:22:22.505145 systemd[1546]: Reached target basic.target - Basic System.
Aug 5 22:22:22.505197 systemd[1546]: Reached target default.target - Main User Target.
Aug 5 22:22:22.505235 systemd[1546]: Startup finished in 123ms.
Aug 5 22:22:22.505545 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 22:22:22.507275 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 22:22:22.571362 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:35732.service - OpenSSH per-connection server daemon (10.0.0.1:35732).
Aug 5 22:22:22.623272 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 35732 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:22:22.625304 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:22.629840 systemd-logind[1426]: New session 2 of user core.
Aug 5 22:22:22.639556 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 22:22:22.697698 sshd[1557]: pam_unix(sshd:session): session closed for user core
Aug 5 22:22:22.714456 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:35732.service: Deactivated successfully.
Aug 5 22:22:22.716291 systemd[1]: session-2.scope: Deactivated successfully.
Aug 5 22:22:22.718038 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit.
Aug 5 22:22:22.719339 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:35742.service - OpenSSH per-connection server daemon (10.0.0.1:35742).
Aug 5 22:22:22.720147 systemd-logind[1426]: Removed session 2.
Aug 5 22:22:22.753709 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 35742 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:22:22.755186 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:22.759190 systemd-logind[1426]: New session 3 of user core.
Aug 5 22:22:22.773471 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 22:22:22.825911 sshd[1564]: pam_unix(sshd:session): session closed for user core
Aug 5 22:22:22.843292 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:35742.service: Deactivated successfully.
Aug 5 22:22:22.844965 systemd[1]: session-3.scope: Deactivated successfully.
Aug 5 22:22:22.846440 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit.
Aug 5 22:22:22.857693 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:35748.service - OpenSSH per-connection server daemon (10.0.0.1:35748).
Aug 5 22:22:22.858834 systemd-logind[1426]: Removed session 3.
Aug 5 22:22:22.887668 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 35748 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:22:22.889221 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:22.893572 systemd-logind[1426]: New session 4 of user core.
Aug 5 22:22:22.903470 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 22:22:22.959979 sshd[1571]: pam_unix(sshd:session): session closed for user core
Aug 5 22:22:22.972710 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:35748.service: Deactivated successfully.
Aug 5 22:22:22.974743 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 22:22:22.976474 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit.
Aug 5 22:22:22.996886 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:35760.service - OpenSSH per-connection server daemon (10.0.0.1:35760).
Aug 5 22:22:22.998036 systemd-logind[1426]: Removed session 4.
Aug 5 22:22:23.028753 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 35760 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:22:23.030416 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:23.035027 systemd-logind[1426]: New session 5 of user core.
Aug 5 22:22:23.044675 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 22:22:23.103682 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 22:22:23.103983 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:22:23.120638 sudo[1582]: pam_unix(sudo:session): session closed for user root
Aug 5 22:22:23.122736 sshd[1578]: pam_unix(sshd:session): session closed for user core
Aug 5 22:22:23.134529 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:35760.service: Deactivated successfully.
Aug 5 22:22:23.136288 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 22:22:23.138321 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit.
Aug 5 22:22:23.151667 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:35764.service - OpenSSH per-connection server daemon (10.0.0.1:35764).
Aug 5 22:22:23.152816 systemd-logind[1426]: Removed session 5.
Aug 5 22:22:23.181812 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 35764 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:22:23.183446 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:23.187599 systemd-logind[1426]: New session 6 of user core.
Aug 5 22:22:23.197500 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 22:22:23.252599 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 22:22:23.252929 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:22:23.257008 sudo[1591]: pam_unix(sudo:session): session closed for user root
Aug 5 22:22:23.264701 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 22:22:23.265110 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:22:23.285673 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 22:22:23.287579 auditctl[1594]: No rules
Aug 5 22:22:23.288010 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 22:22:23.288231 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 22:22:23.291226 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:22:23.322103 augenrules[1612]: No rules
Aug 5 22:22:23.323978 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:22:23.325442 sudo[1590]: pam_unix(sudo:session): session closed for user root
Aug 5 22:22:23.327306 sshd[1587]: pam_unix(sshd:session): session closed for user core
Aug 5 22:22:23.342525 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:35764.service: Deactivated successfully.
Aug 5 22:22:23.344432 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 22:22:23.345941 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit.
Aug 5 22:22:23.350716 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:35778.service - OpenSSH per-connection server daemon (10.0.0.1:35778).
Aug 5 22:22:23.351738 systemd-logind[1426]: Removed session 6.
Aug 5 22:22:23.381922 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 35778 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:22:23.383638 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:22:23.387934 systemd-logind[1426]: New session 7 of user core.
Aug 5 22:22:23.403772 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 22:22:23.457896 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 22:22:23.458183 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:22:23.565678 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 22:22:23.565824 (dockerd)[1634]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 22:22:23.803538 dockerd[1634]: time="2024-08-05T22:22:23.803372564Z" level=info msg="Starting up"
Aug 5 22:22:25.122583 dockerd[1634]: time="2024-08-05T22:22:25.122526559Z" level=info msg="Loading containers: start."
Aug 5 22:22:25.405381 kernel: Initializing XFRM netlink socket
Aug 5 22:22:25.497376 systemd-networkd[1380]: docker0: Link UP
Aug 5 22:22:26.512604 dockerd[1634]: time="2024-08-05T22:22:26.512554838Z" level=info msg="Loading containers: done."
Aug 5 22:22:26.610580 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck388447137-merged.mount: Deactivated successfully.
Aug 5 22:22:26.655009 dockerd[1634]: time="2024-08-05T22:22:26.654936284Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 22:22:26.655182 dockerd[1634]: time="2024-08-05T22:22:26.655156747Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 22:22:26.655323 dockerd[1634]: time="2024-08-05T22:22:26.655292953Z" level=info msg="Daemon has completed initialization"
Aug 5 22:22:26.814545 dockerd[1634]: time="2024-08-05T22:22:26.813756729Z" level=info msg="API listen on /run/docker.sock"
Aug 5 22:22:26.814004 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 22:22:27.713864 containerd[1439]: time="2024-08-05T22:22:27.713796044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\""
Aug 5 22:22:28.685494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177829386.mount: Deactivated successfully.
Aug 5 22:22:30.333501 containerd[1439]: time="2024-08-05T22:22:30.333423641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:30.334275 containerd[1439]: time="2024-08-05T22:22:30.334182264Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.7: active requests=0, bytes read=35232396"
Aug 5 22:22:30.335569 containerd[1439]: time="2024-08-05T22:22:30.335515624Z" level=info msg="ImageCreate event name:\"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:30.338788 containerd[1439]: time="2024-08-05T22:22:30.338744479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:30.340055 containerd[1439]: time="2024-08-05T22:22:30.340009201Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.7\" with image id \"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\", size \"35229196\" in 2.626153094s"
Aug 5 22:22:30.340096 containerd[1439]: time="2024-08-05T22:22:30.340063473Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\" returns image reference \"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\""
Aug 5 22:22:30.390669 containerd[1439]: time="2024-08-05T22:22:30.390594467Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\""
Aug 5 22:22:30.918775 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:22:30.936623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:22:31.097833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:22:31.223873 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:22:31.861672 kubelet[1845]: E0805 22:22:31.861618 1845 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:22:31.871587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:22:31.871820 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:22:33.584340 containerd[1439]: time="2024-08-05T22:22:33.584237497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:33.599428 containerd[1439]: time="2024-08-05T22:22:33.599317137Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.7: active requests=0, bytes read=32204824"
Aug 5 22:22:33.632138 containerd[1439]: time="2024-08-05T22:22:33.632076472Z" level=info msg="ImageCreate event name:\"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:33.663856 containerd[1439]: time="2024-08-05T22:22:33.663780829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:33.664804 containerd[1439]: time="2024-08-05T22:22:33.664768462Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.7\" with image id \"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\", size \"33754770\" in 3.274118681s"
Aug 5 22:22:33.664804 containerd[1439]: time="2024-08-05T22:22:33.664801293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\" returns image reference \"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\""
Aug 5 22:22:33.695444 containerd[1439]: time="2024-08-05T22:22:33.695393244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\""
Aug 5 22:22:35.722578 containerd[1439]: time="2024-08-05T22:22:35.722527755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:35.723211 containerd[1439]: time="2024-08-05T22:22:35.723173927Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.7: active requests=0, bytes read=17320803"
Aug 5 22:22:35.724223 containerd[1439]: time="2024-08-05T22:22:35.724196204Z" level=info msg="ImageCreate event name:\"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:35.727861 containerd[1439]: time="2024-08-05T22:22:35.727819328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:35.728842 containerd[1439]: time="2024-08-05T22:22:35.728806259Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.7\" with image id \"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\", size \"18870767\" in 2.033361719s"
Aug 5 22:22:35.728842 containerd[1439]: time="2024-08-05T22:22:35.728835985Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\" returns image reference \"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\""
Aug 5 22:22:35.749958 containerd[1439]: time="2024-08-05T22:22:35.749915376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\""
Aug 5 22:22:36.944137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2086184698.mount: Deactivated successfully.
Aug 5 22:22:37.624036 containerd[1439]: time="2024-08-05T22:22:37.623947996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:37.626053 containerd[1439]: time="2024-08-05T22:22:37.626000215Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.7: active requests=0, bytes read=28600088"
Aug 5 22:22:37.627451 containerd[1439]: time="2024-08-05T22:22:37.627411181Z" level=info msg="ImageCreate event name:\"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:37.629816 containerd[1439]: time="2024-08-05T22:22:37.629738486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:37.630295 containerd[1439]: time="2024-08-05T22:22:37.630253582Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.7\" with image id \"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\", repo tag \"registry.k8s.io/kube-proxy:v1.29.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\", size \"28599107\" in 1.880297799s"
Aug 5 22:22:37.630295 containerd[1439]: time="2024-08-05T22:22:37.630281925Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\" returns image reference \"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\""
Aug 5 22:22:37.660742 containerd[1439]: time="2024-08-05T22:22:37.660684460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug 5 22:22:38.235428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299966933.mount: Deactivated successfully.
Aug 5 22:22:41.485115 containerd[1439]: time="2024-08-05T22:22:41.485035444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:41.486368 containerd[1439]: time="2024-08-05T22:22:41.486268205Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Aug 5 22:22:41.488008 containerd[1439]: time="2024-08-05T22:22:41.487961341Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:41.491665 containerd[1439]: time="2024-08-05T22:22:41.491619842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:41.492696 containerd[1439]: time="2024-08-05T22:22:41.492646146Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.831916722s"
Aug 5 22:22:41.492696 containerd[1439]: time="2024-08-05T22:22:41.492694898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Aug 5 22:22:41.521138 containerd[1439]: time="2024-08-05T22:22:41.521082204Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:22:42.094695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:22:42.102771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:22:42.104604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657636599.mount: Deactivated successfully.
Aug 5 22:22:42.108456 containerd[1439]: time="2024-08-05T22:22:42.108337788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:42.109392 containerd[1439]: time="2024-08-05T22:22:42.109291216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Aug 5 22:22:42.111297 containerd[1439]: time="2024-08-05T22:22:42.111256011Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:42.113714 containerd[1439]: time="2024-08-05T22:22:42.113676841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:42.114389 containerd[1439]: time="2024-08-05T22:22:42.114341908Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 593.215081ms"
Aug 5 22:22:42.114437 containerd[1439]: time="2024-08-05T22:22:42.114394196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Aug 5 22:22:42.143191 containerd[1439]: time="2024-08-05T22:22:42.143146807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 22:22:42.255424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:22:42.260154 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:22:42.444424 kubelet[1957]: E0805 22:22:42.442789 1957 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:22:42.449774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:22:42.450011 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:22:44.121896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322738404.mount: Deactivated successfully.
Aug 5 22:22:48.107337 containerd[1439]: time="2024-08-05T22:22:48.107248707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:48.108199 containerd[1439]: time="2024-08-05T22:22:48.108113499Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Aug 5 22:22:48.109671 containerd[1439]: time="2024-08-05T22:22:48.109624142Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:48.113286 containerd[1439]: time="2024-08-05T22:22:48.113245934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:22:48.114701 containerd[1439]: time="2024-08-05T22:22:48.114662952Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 5.971480407s"
Aug 5 22:22:48.114701 containerd[1439]: time="2024-08-05T22:22:48.114696434Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Aug 5 22:22:50.996059 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:22:51.009580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:22:51.027601 systemd[1]: Reloading requested from client PID 2093 ('systemctl') (unit session-7.scope)...
Aug 5 22:22:51.027617 systemd[1]: Reloading...
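For scale, the containerd entries above log both the bytes read and the wall-clock pull time, so effective throughput falls out directly; the etcd pull works out to roughly 9 MiB/s. A back-of-the-envelope check using only the two logged values:

```shell
# Pull throughput for etcd:3.5.10-0, from the log entries above:
#   "bytes read=56651625" and "in 5.971480407s"
awk 'BEGIN {
    bytes = 56651625          # logged bytes read
    secs  = 5.971480407       # logged pull duration
    printf "%.1f MiB/s\n", bytes / secs / 1048576
}'
# prints "9.0 MiB/s"
```

The same arithmetic applied to the earlier pulls gives comparable figures, which suggests the registry link, not the local disk, was the bottleneck during this boot.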
Aug 5 22:22:51.113407 zram_generator::config[2129]: No configuration found.
Aug 5 22:22:51.406043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:22:51.520425 systemd[1]: Reloading finished in 492 ms.
Aug 5 22:22:51.578904 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 22:22:51.579005 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 22:22:51.579315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:22:51.581246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:22:51.741358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:22:51.747059 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:22:51.790551 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:22:51.790551 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:22:51.790551 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
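The docker.socket warning above is systemd rewriting a legacy /var/run path at unit load time; it is harmless but repeats on every reload until the unit itself is fixed. A sketch of the corrected line (the warning names line 6 of the unit; the rest of the file is unchanged and not shown):

```
[Socket]
# was: ListenStream=/var/run/docker.sock  (legacy /var/run path, rewritten at load)
ListenStream=/run/docker.sock
```

On Flatcar, /usr is read-only, so in practice the override would go in /etc/systemd/system rather than editing the shipped unit in /usr/lib.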
Aug 5 22:22:51.791959 kubelet[2178]: I0805 22:22:51.791898 2178 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:22:52.049445 kubelet[2178]: I0805 22:22:52.049275 2178 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Aug 5 22:22:52.049445 kubelet[2178]: I0805 22:22:52.049327 2178 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:22:52.049630 kubelet[2178]: I0805 22:22:52.049610 2178 server.go:919] "Client rotation is on, will bootstrap in background"
Aug 5 22:22:52.070884 kubelet[2178]: E0805 22:22:52.070828 2178 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.55:6443: connect: connection refused
Aug 5 22:22:52.073144 kubelet[2178]: I0805 22:22:52.073094 2178 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:22:52.088862 kubelet[2178]: I0805 22:22:52.088810 2178 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:22:52.089094 kubelet[2178]: I0805 22:22:52.089064 2178 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:22:52.089257 kubelet[2178]: I0805 22:22:52.089232 2178 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:22:52.089919 kubelet[2178]: I0805 22:22:52.089889 2178 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:22:52.089919 kubelet[2178]: I0805 22:22:52.089909 2178 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:22:52.091450 kubelet[2178]: I0805 22:22:52.091422 2178 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:22:52.091555 kubelet[2178]: I0805 22:22:52.091530 2178 kubelet.go:396] "Attempting to sync node with API server"
Aug 5 22:22:52.091555 kubelet[2178]: I0805 22:22:52.091552 2178 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:22:52.091604 kubelet[2178]: I0805 22:22:52.091587 2178 kubelet.go:312] "Adding apiserver pod source"
Aug 5 22:22:52.091604 kubelet[2178]: I0805 22:22:52.091598 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:22:52.092320 kubelet[2178]: W0805 22:22:52.092235 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Aug 5 22:22:52.092320 kubelet[2178]: E0805 22:22:52.092299 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Aug 5 22:22:52.092320 kubelet[2178]: W0805 22:22:52.092292 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Aug 5 22:22:52.092438 kubelet[2178]: E0805 22:22:52.092328 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Aug 5 22:22:52.093475 kubelet[2178]: I0805 22:22:52.092987 2178 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 22:22:52.096198 kubelet[2178]: I0805 22:22:52.096172 2178 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 22:22:52.096270 kubelet[2178]: W0805 22:22:52.096249 2178 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:22:52.101253 kubelet[2178]: I0805 22:22:52.099140 2178 server.go:1256] "Started kubelet"
Aug 5 22:22:52.101253 kubelet[2178]: I0805 22:22:52.099713 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 22:22:52.101253 kubelet[2178]: I0805 22:22:52.100101 2178 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:22:52.101253 kubelet[2178]: I0805 22:22:52.100156 2178 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:22:52.101495 kubelet[2178]: I0805 22:22:52.101472 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:22:52.101825 kubelet[2178]: I0805 22:22:52.101807 2178 server.go:461] "Adding debug handlers to kubelet server"
Aug 5 22:22:52.103424 kubelet[2178]: I0805 22:22:52.103396 2178 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:22:52.103590 kubelet[2178]: I0805 22:22:52.103568 2178 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:22:52.103646 kubelet[2178]: I0805 22:22:52.103630 2178 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:22:52.107723 kubelet[2178]: W0805 22:22:52.107627 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Aug 5 22:22:52.107723 kubelet[2178]: E0805 22:22:52.107715 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Aug 5 22:22:52.108524 kubelet[2178]: E0805 22:22:52.108495 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms"
Aug 5 22:22:52.108705 kubelet[2178]: I0805 22:22:52.108580 2178 factory.go:221] Registration of the systemd container factory successfully
Aug 5 22:22:52.108756 kubelet[2178]: I0805 22:22:52.108708 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 22:22:52.110149 kubelet[2178]: E0805 22:22:52.110121 2178 kubelet.go:1462] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:22:52.110538 kubelet[2178]: I0805 22:22:52.110497 2178 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:22:52.111823 kubelet[2178]: E0805 22:22:52.111796 2178 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17e8f54dd4b484c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-08-05 22:22:52.099101897 +0000 UTC m=+0.347490758,LastTimestamp:2024-08-05 22:22:52.099101897 +0000 UTC m=+0.347490758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 5 22:22:52.123986 kubelet[2178]: I0805 22:22:52.123947 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:22:52.125966 kubelet[2178]: I0805 22:22:52.125913 2178 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:22:52.125966 kubelet[2178]: I0805 22:22:52.125966 2178 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:22:52.126132 kubelet[2178]: I0805 22:22:52.125996 2178 kubelet.go:2329] "Starting kubelet main sync loop" Aug 5 22:22:52.126132 kubelet[2178]: E0805 22:22:52.126051 2178 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:22:52.126751 kubelet[2178]: W0805 22:22:52.126680 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:52.126751 kubelet[2178]: E0805 22:22:52.126736 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:52.128745 kubelet[2178]: I0805 22:22:52.128717 2178 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:22:52.128745 kubelet[2178]: I0805 22:22:52.128739 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:22:52.128829 kubelet[2178]: I0805 22:22:52.128766 2178 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:22:52.205642 kubelet[2178]: I0805 22:22:52.205589 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:22:52.206095 kubelet[2178]: E0805 22:22:52.206061 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Aug 5 22:22:52.226395 kubelet[2178]: E0805 22:22:52.226278 2178 kubelet.go:2353] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Aug 5 22:22:52.309482 kubelet[2178]: E0805 22:22:52.309323 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Aug 5 22:22:52.408163 kubelet[2178]: I0805 22:22:52.408115 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:22:52.408500 kubelet[2178]: E0805 22:22:52.408483 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Aug 5 22:22:52.426824 kubelet[2178]: E0805 22:22:52.426736 2178 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:22:52.486198 kubelet[2178]: I0805 22:22:52.486118 2178 policy_none.go:49] "None policy: Start" Aug 5 22:22:52.487068 kubelet[2178]: I0805 22:22:52.487049 2178 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:22:52.487118 kubelet[2178]: I0805 22:22:52.487072 2178 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:22:52.496225 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 22:22:52.519365 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 5 22:22:52.523509 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 5 22:22:52.548013 kubelet[2178]: I0805 22:22:52.547913 2178 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:22:52.548331 kubelet[2178]: I0805 22:22:52.548303 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:22:52.550117 kubelet[2178]: E0805 22:22:52.549954 2178 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 5 22:22:52.710842 kubelet[2178]: E0805 22:22:52.710699 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Aug 5 22:22:52.810592 kubelet[2178]: I0805 22:22:52.810540 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:22:52.811159 kubelet[2178]: E0805 22:22:52.810983 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Aug 5 22:22:52.827293 kubelet[2178]: I0805 22:22:52.827196 2178 topology_manager.go:215] "Topology Admit Handler" podUID="165949b7f95bc54fc237af9397bb9a2b" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 22:22:52.828535 kubelet[2178]: I0805 22:22:52.828480 2178 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 22:22:52.829423 kubelet[2178]: I0805 22:22:52.829394 2178 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 22:22:52.835614 systemd[1]: Created slice kubepods-burstable-pod165949b7f95bc54fc237af9397bb9a2b.slice - 
libcontainer container kubepods-burstable-pod165949b7f95bc54fc237af9397bb9a2b.slice. Aug 5 22:22:52.863588 systemd[1]: Created slice kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice - libcontainer container kubepods-burstable-pod088f5b844ad7241e38f298babde6e061.slice. Aug 5 22:22:52.879871 systemd[1]: Created slice kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice - libcontainer container kubepods-burstable-podcb686d9581fc5af7d1cc8e14735ce3db.slice. Aug 5 22:22:52.907415 kubelet[2178]: I0805 22:22:52.907333 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:22:52.907571 kubelet[2178]: I0805 22:22:52.907478 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:22:52.907571 kubelet[2178]: I0805 22:22:52.907527 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/165949b7f95bc54fc237af9397bb9a2b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"165949b7f95bc54fc237af9397bb9a2b\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:22:52.907659 kubelet[2178]: I0805 22:22:52.907592 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/165949b7f95bc54fc237af9397bb9a2b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"165949b7f95bc54fc237af9397bb9a2b\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:22:52.907659 kubelet[2178]: I0805 22:22:52.907621 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/165949b7f95bc54fc237af9397bb9a2b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"165949b7f95bc54fc237af9397bb9a2b\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:22:52.907659 kubelet[2178]: I0805 22:22:52.907650 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:22:52.907778 kubelet[2178]: I0805 22:22:52.907692 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:22:52.907778 kubelet[2178]: I0805 22:22:52.907716 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:22:52.907778 kubelet[2178]: I0805 22:22:52.907739 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:22:53.161503 kubelet[2178]: E0805 22:22:53.161341 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:53.162256 containerd[1439]: time="2024-08-05T22:22:53.162212674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:165949b7f95bc54fc237af9397bb9a2b,Namespace:kube-system,Attempt:0,}" Aug 5 22:22:53.177481 kubelet[2178]: E0805 22:22:53.177448 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:53.178187 containerd[1439]: time="2024-08-05T22:22:53.177919299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,}" Aug 5 22:22:53.183261 kubelet[2178]: E0805 22:22:53.183242 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:53.183649 containerd[1439]: time="2024-08-05T22:22:53.183614593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,}" Aug 5 22:22:53.408529 kubelet[2178]: W0805 22:22:53.408438 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:53.408529 kubelet[2178]: E0805 22:22:53.408528 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list 
*v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:53.500489 kubelet[2178]: W0805 22:22:53.500311 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:53.500489 kubelet[2178]: E0805 22:22:53.500387 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:53.500489 kubelet[2178]: W0805 22:22:53.500425 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:53.500736 kubelet[2178]: E0805 22:22:53.500501 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:53.511499 kubelet[2178]: E0805 22:22:53.511458 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s" Aug 5 22:22:53.552152 kubelet[2178]: W0805 22:22:53.552104 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:53.552152 kubelet[2178]: E0805 22:22:53.552152 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:53.612981 kubelet[2178]: I0805 22:22:53.612947 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:22:53.613300 kubelet[2178]: E0805 22:22:53.613275 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Aug 5 22:22:54.139524 kubelet[2178]: E0805 22:22:54.139471 2178 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:54.618449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983578581.mount: Deactivated successfully. 
Aug 5 22:22:54.801766 containerd[1439]: time="2024-08-05T22:22:54.801659211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:22:54.816824 containerd[1439]: time="2024-08-05T22:22:54.816719814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:22:54.832970 containerd[1439]: time="2024-08-05T22:22:54.832644658Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:22:54.843751 containerd[1439]: time="2024-08-05T22:22:54.843636888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 5 22:22:54.854386 containerd[1439]: time="2024-08-05T22:22:54.854309739Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:22:54.859881 containerd[1439]: time="2024-08-05T22:22:54.859825457Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:22:54.872543 containerd[1439]: time="2024-08-05T22:22:54.872405851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:22:54.891105 containerd[1439]: time="2024-08-05T22:22:54.891044520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:22:54.891776 
containerd[1439]: time="2024-08-05T22:22:54.891743014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.713728223s" Aug 5 22:22:54.892451 containerd[1439]: time="2024-08-05T22:22:54.892414737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.730069799s" Aug 5 22:22:54.924019 containerd[1439]: time="2024-08-05T22:22:54.923956968Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.740241693s" Aug 5 22:22:55.112843 kubelet[2178]: E0805 22:22:55.112800 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="3.2s" Aug 5 22:22:55.215139 kubelet[2178]: I0805 22:22:55.215012 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Aug 5 22:22:55.215531 kubelet[2178]: E0805 22:22:55.215429 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Aug 5 22:22:55.421939 kubelet[2178]: W0805 22:22:55.421866 2178 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:55.421939 kubelet[2178]: E0805 22:22:55.421910 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:55.621549 kubelet[2178]: W0805 22:22:55.621431 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:55.621549 kubelet[2178]: E0805 22:22:55.621473 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:55.623852 kubelet[2178]: W0805 22:22:55.623808 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:55.623907 kubelet[2178]: E0805 22:22:55.623857 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:55.664949 containerd[1439]: time="2024-08-05T22:22:55.664830017Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:22:55.664949 containerd[1439]: time="2024-08-05T22:22:55.664894921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:55.664949 containerd[1439]: time="2024-08-05T22:22:55.664917444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:22:55.664949 containerd[1439]: time="2024-08-05T22:22:55.664932643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:55.665857 containerd[1439]: time="2024-08-05T22:22:55.665724263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:22:55.665857 containerd[1439]: time="2024-08-05T22:22:55.665786773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:55.665857 containerd[1439]: time="2024-08-05T22:22:55.665807472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:22:55.665857 containerd[1439]: time="2024-08-05T22:22:55.665822181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:55.674023 containerd[1439]: time="2024-08-05T22:22:55.673870328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:22:55.674023 containerd[1439]: time="2024-08-05T22:22:55.673926396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:55.674023 containerd[1439]: time="2024-08-05T22:22:55.673942076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:22:55.674023 containerd[1439]: time="2024-08-05T22:22:55.673952455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:22:55.688525 systemd[1]: Started cri-containerd-a135979a9e25878dfb1be7d5b2d5fa9157f04577c5c1699956aa237680d89e1d.scope - libcontainer container a135979a9e25878dfb1be7d5b2d5fa9157f04577c5c1699956aa237680d89e1d. Aug 5 22:22:55.693862 systemd[1]: Started cri-containerd-2a7ac8f02138aca2183d40882321cc8e6c325dbe7d8c488ed2bc2bc7a034952c.scope - libcontainer container 2a7ac8f02138aca2183d40882321cc8e6c325dbe7d8c488ed2bc2bc7a034952c. Aug 5 22:22:55.696218 systemd[1]: Started cri-containerd-d9faab22cb3d0829ce3c3200a560e769162b1b760ffe4526484d5ef41ca9a293.scope - libcontainer container d9faab22cb3d0829ce3c3200a560e769162b1b760ffe4526484d5ef41ca9a293. 
Aug 5 22:22:55.739387 containerd[1439]: time="2024-08-05T22:22:55.739320265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cb686d9581fc5af7d1cc8e14735ce3db,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9faab22cb3d0829ce3c3200a560e769162b1b760ffe4526484d5ef41ca9a293\"" Aug 5 22:22:55.739387 containerd[1439]: time="2024-08-05T22:22:55.739318502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:088f5b844ad7241e38f298babde6e061,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a7ac8f02138aca2183d40882321cc8e6c325dbe7d8c488ed2bc2bc7a034952c\"" Aug 5 22:22:55.740727 kubelet[2178]: E0805 22:22:55.740708 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:55.741232 kubelet[2178]: E0805 22:22:55.740925 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:55.743463 containerd[1439]: time="2024-08-05T22:22:55.743421302Z" level=info msg="CreateContainer within sandbox \"2a7ac8f02138aca2183d40882321cc8e6c325dbe7d8c488ed2bc2bc7a034952c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:22:55.743581 containerd[1439]: time="2024-08-05T22:22:55.743432283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:165949b7f95bc54fc237af9397bb9a2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a135979a9e25878dfb1be7d5b2d5fa9157f04577c5c1699956aa237680d89e1d\"" Aug 5 22:22:55.744260 kubelet[2178]: E0805 22:22:55.744073 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:22:55.744301 containerd[1439]: 
time="2024-08-05T22:22:55.744172505Z" level=info msg="CreateContainer within sandbox \"d9faab22cb3d0829ce3c3200a560e769162b1b760ffe4526484d5ef41ca9a293\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:22:55.746894 containerd[1439]: time="2024-08-05T22:22:55.746873921Z" level=info msg="CreateContainer within sandbox \"a135979a9e25878dfb1be7d5b2d5fa9157f04577c5c1699956aa237680d89e1d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:22:55.923256 kubelet[2178]: W0805 22:22:55.923106 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:55.923256 kubelet[2178]: E0805 22:22:55.923176 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused Aug 5 22:22:56.554599 containerd[1439]: time="2024-08-05T22:22:56.554539595Z" level=info msg="CreateContainer within sandbox \"2a7ac8f02138aca2183d40882321cc8e6c325dbe7d8c488ed2bc2bc7a034952c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"803dbbb46649b09379ec75158bd732c2128f19ecbb0440899b0d36acf81808ca\"" Aug 5 22:22:56.555154 containerd[1439]: time="2024-08-05T22:22:56.555127967Z" level=info msg="StartContainer for \"803dbbb46649b09379ec75158bd732c2128f19ecbb0440899b0d36acf81808ca\"" Aug 5 22:22:56.586483 systemd[1]: Started cri-containerd-803dbbb46649b09379ec75158bd732c2128f19ecbb0440899b0d36acf81808ca.scope - libcontainer container 803dbbb46649b09379ec75158bd732c2128f19ecbb0440899b0d36acf81808ca. 
Aug 5 22:22:56.813211 containerd[1439]: time="2024-08-05T22:22:56.812860904Z" level=info msg="CreateContainer within sandbox \"d9faab22cb3d0829ce3c3200a560e769162b1b760ffe4526484d5ef41ca9a293\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f4824c2e4c648a43e3342ae8e2300ab051b768e00ba219120b6e3fc3ff6e18cf\""
Aug 5 22:22:56.813211 containerd[1439]: time="2024-08-05T22:22:56.812902003Z" level=info msg="StartContainer for \"803dbbb46649b09379ec75158bd732c2128f19ecbb0440899b0d36acf81808ca\" returns successfully"
Aug 5 22:22:56.813612 containerd[1439]: time="2024-08-05T22:22:56.813589754Z" level=info msg="StartContainer for \"f4824c2e4c648a43e3342ae8e2300ab051b768e00ba219120b6e3fc3ff6e18cf\""
Aug 5 22:22:56.847478 systemd[1]: Started cri-containerd-f4824c2e4c648a43e3342ae8e2300ab051b768e00ba219120b6e3fc3ff6e18cf.scope - libcontainer container f4824c2e4c648a43e3342ae8e2300ab051b768e00ba219120b6e3fc3ff6e18cf.
Aug 5 22:22:57.044997 containerd[1439]: time="2024-08-05T22:22:57.044928732Z" level=info msg="StartContainer for \"f4824c2e4c648a43e3342ae8e2300ab051b768e00ba219120b6e3fc3ff6e18cf\" returns successfully"
Aug 5 22:22:57.045120 containerd[1439]: time="2024-08-05T22:22:57.045027720Z" level=info msg="CreateContainer within sandbox \"a135979a9e25878dfb1be7d5b2d5fa9157f04577c5c1699956aa237680d89e1d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d1a954a1c50eb8cf9b822bbdd2ac5ceb5317f49c68aaa812cb1bd692b1c3e78c\""
Aug 5 22:22:57.045780 containerd[1439]: time="2024-08-05T22:22:57.045747421Z" level=info msg="StartContainer for \"d1a954a1c50eb8cf9b822bbdd2ac5ceb5317f49c68aaa812cb1bd692b1c3e78c\""
Aug 5 22:22:57.083498 systemd[1]: Started cri-containerd-d1a954a1c50eb8cf9b822bbdd2ac5ceb5317f49c68aaa812cb1bd692b1c3e78c.scope - libcontainer container d1a954a1c50eb8cf9b822bbdd2ac5ceb5317f49c68aaa812cb1bd692b1c3e78c.
Aug 5 22:22:57.244495 containerd[1439]: time="2024-08-05T22:22:57.243771939Z" level=info msg="StartContainer for \"d1a954a1c50eb8cf9b822bbdd2ac5ceb5317f49c68aaa812cb1bd692b1c3e78c\" returns successfully"
Aug 5 22:22:57.248099 kubelet[2178]: E0805 22:22:57.248016 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:57.253822 kubelet[2178]: E0805 22:22:57.253500 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:57.254394 kubelet[2178]: E0805 22:22:57.254311 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:58.256078 kubelet[2178]: E0805 22:22:58.256039 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:58.257602 kubelet[2178]: E0805 22:22:58.256664 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:58.257602 kubelet[2178]: E0805 22:22:58.257549 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:58.316082 kubelet[2178]: E0805 22:22:58.316032 2178 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Aug 5 22:22:58.416757 kubelet[2178]: I0805 22:22:58.416713 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:22:58.420196 kubelet[2178]: I0805 22:22:58.420175 2178 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Aug 5 22:22:58.426191 kubelet[2178]: E0805 22:22:58.426166 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:58.526952 kubelet[2178]: E0805 22:22:58.526812 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:58.627861 kubelet[2178]: E0805 22:22:58.627809 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:58.728515 kubelet[2178]: E0805 22:22:58.728448 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:58.829140 kubelet[2178]: E0805 22:22:58.829006 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:58.929650 kubelet[2178]: E0805 22:22:58.929601 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.030422 kubelet[2178]: E0805 22:22:59.030377 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.131584 kubelet[2178]: E0805 22:22:59.131445 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.232344 kubelet[2178]: E0805 22:22:59.232297 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.257582 kubelet[2178]: E0805 22:22:59.257340 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:59.258068 kubelet[2178]: E0805 22:22:59.257709 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:59.258642 kubelet[2178]: E0805 22:22:59.258625 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:22:59.332842 kubelet[2178]: E0805 22:22:59.332775 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.433569 kubelet[2178]: E0805 22:22:59.433411 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.534193 kubelet[2178]: E0805 22:22:59.534125 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.635142 kubelet[2178]: E0805 22:22:59.635075 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.735945 kubelet[2178]: E0805 22:22:59.735791 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.836460 kubelet[2178]: E0805 22:22:59.836395 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:22:59.936972 kubelet[2178]: E0805 22:22:59.936902 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.037590 kubelet[2178]: E0805 22:23:00.037439 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.137641 kubelet[2178]: E0805 22:23:00.137569 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.238555 kubelet[2178]: E0805 22:23:00.238499 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.339548 kubelet[2178]: E0805 22:23:00.339288 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.379038 kubelet[2178]: E0805 22:23:00.378985 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:00.440214 kubelet[2178]: E0805 22:23:00.440131 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.540939 kubelet[2178]: E0805 22:23:00.540861 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.642275 kubelet[2178]: E0805 22:23:00.641443 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.742033 kubelet[2178]: E0805 22:23:00.741958 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.843232 kubelet[2178]: E0805 22:23:00.843160 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.943699 systemd[1]: Reloading requested from client PID 2459 ('systemctl') (unit session-7.scope)...
Aug 5 22:23:00.947095 kubelet[2178]: E0805 22:23:00.945204 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:00.943723 systemd[1]: Reloading...
Aug 5 22:23:01.046385 kubelet[2178]: E0805 22:23:01.046043 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:01.057233 zram_generator::config[2496]: No configuration found.
Aug 5 22:23:01.146636 kubelet[2178]: E0805 22:23:01.146402 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:01.234109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:23:01.247663 kubelet[2178]: E0805 22:23:01.247579 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:01.348705 kubelet[2178]: E0805 22:23:01.348609 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:23:01.366751 systemd[1]: Reloading finished in 422 ms.
Aug 5 22:23:01.421323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:23:01.445730 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 22:23:01.446137 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:23:01.459794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:23:01.645222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:23:01.665142 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:23:01.729443 kubelet[2541]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:23:01.729443 kubelet[2541]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:23:01.729443 kubelet[2541]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:23:01.729443 kubelet[2541]: I0805 22:23:01.729007 2541 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:23:01.734626 kubelet[2541]: I0805 22:23:01.734571 2541 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Aug 5 22:23:01.734626 kubelet[2541]: I0805 22:23:01.734608 2541 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:23:01.734976 kubelet[2541]: I0805 22:23:01.734947 2541 server.go:919] "Client rotation is on, will bootstrap in background"
Aug 5 22:23:01.737041 kubelet[2541]: I0805 22:23:01.736989 2541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 5 22:23:01.740379 kubelet[2541]: I0805 22:23:01.739611 2541 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:23:01.752958 kubelet[2541]: I0805 22:23:01.752909 2541 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:23:01.753314 kubelet[2541]: I0805 22:23:01.753275 2541 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:23:01.753652 kubelet[2541]: I0805 22:23:01.753602 2541 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:23:01.753652 kubelet[2541]: I0805 22:23:01.753647 2541 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:23:01.753794 kubelet[2541]: I0805 22:23:01.753675 2541 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:23:01.753794 kubelet[2541]: I0805 22:23:01.753720 2541 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:23:01.753888 kubelet[2541]: I0805 22:23:01.753874 2541 kubelet.go:396] "Attempting to sync node with API server"
Aug 5 22:23:01.753931 kubelet[2541]: I0805 22:23:01.753896 2541 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:23:01.753953 kubelet[2541]: I0805 22:23:01.753930 2541 kubelet.go:312] "Adding apiserver pod source"
Aug 5 22:23:01.753977 kubelet[2541]: I0805 22:23:01.753951 2541 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:23:01.754785 kubelet[2541]: I0805 22:23:01.754641 2541 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 22:23:01.755030 kubelet[2541]: I0805 22:23:01.754981 2541 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 22:23:01.758227 kubelet[2541]: I0805 22:23:01.757488 2541 server.go:1256] "Started kubelet"
Aug 5 22:23:01.758227 kubelet[2541]: I0805 22:23:01.757653 2541 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:23:01.758227 kubelet[2541]: I0805 22:23:01.757795 2541 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 22:23:01.758227 kubelet[2541]: I0805 22:23:01.758112 2541 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:23:01.758659 kubelet[2541]: I0805 22:23:01.758629 2541 server.go:461] "Adding debug handlers to kubelet server"
Aug 5 22:23:01.766087 kubelet[2541]: I0805 22:23:01.766005 2541 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:23:01.767623 kubelet[2541]: E0805 22:23:01.767408 2541 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:23:01.768001 kubelet[2541]: I0805 22:23:01.767973 2541 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:23:01.768541 kubelet[2541]: I0805 22:23:01.768132 2541 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:23:01.768541 kubelet[2541]: I0805 22:23:01.768308 2541 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:23:01.775122 kubelet[2541]: I0805 22:23:01.775080 2541 factory.go:221] Registration of the systemd container factory successfully
Aug 5 22:23:01.775308 kubelet[2541]: I0805 22:23:01.775266 2541 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 22:23:01.778783 kubelet[2541]: I0805 22:23:01.778642 2541 factory.go:221] Registration of the containerd container factory successfully
Aug 5 22:23:01.789426 kubelet[2541]: I0805 22:23:01.789372 2541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:23:01.790977 kubelet[2541]: I0805 22:23:01.790941 2541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:23:01.790977 kubelet[2541]: I0805 22:23:01.790972 2541 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:23:01.791062 kubelet[2541]: I0805 22:23:01.791050 2541 kubelet.go:2329] "Starting kubelet main sync loop"
Aug 5 22:23:01.791147 kubelet[2541]: E0805 22:23:01.791119 2541 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:23:01.816502 kubelet[2541]: I0805 22:23:01.816472 2541 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:23:01.816502 kubelet[2541]: I0805 22:23:01.816493 2541 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:23:01.816502 kubelet[2541]: I0805 22:23:01.816509 2541 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:23:01.816726 kubelet[2541]: I0805 22:23:01.816693 2541 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 5 22:23:01.816726 kubelet[2541]: I0805 22:23:01.816713 2541 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 5 22:23:01.816726 kubelet[2541]: I0805 22:23:01.816720 2541 policy_none.go:49] "None policy: Start"
Aug 5 22:23:01.817222 kubelet[2541]: I0805 22:23:01.817204 2541 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 5 22:23:01.817274 kubelet[2541]: I0805 22:23:01.817226 2541 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:23:01.817399 kubelet[2541]: I0805 22:23:01.817386 2541 state_mem.go:75] "Updated machine memory state"
Aug 5 22:23:01.823527 kubelet[2541]: I0805 22:23:01.823494 2541 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:23:01.823821 kubelet[2541]: I0805 22:23:01.823803 2541 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:23:01.874042 kubelet[2541]: I0805 22:23:01.873992 2541 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Aug 5 22:23:01.885523 kubelet[2541]: I0805 22:23:01.885342 2541 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Aug 5 22:23:01.885523 kubelet[2541]: I0805 22:23:01.885471 2541 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Aug 5 22:23:01.893065 kubelet[2541]: I0805 22:23:01.892077 2541 topology_manager.go:215] "Topology Admit Handler" podUID="cb686d9581fc5af7d1cc8e14735ce3db" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 22:23:01.893065 kubelet[2541]: I0805 22:23:01.892195 2541 topology_manager.go:215] "Topology Admit Handler" podUID="165949b7f95bc54fc237af9397bb9a2b" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 22:23:01.893065 kubelet[2541]: I0805 22:23:01.892253 2541 topology_manager.go:215] "Topology Admit Handler" podUID="088f5b844ad7241e38f298babde6e061" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 22:23:02.069247 kubelet[2541]: I0805 22:23:02.069181 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/165949b7f95bc54fc237af9397bb9a2b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"165949b7f95bc54fc237af9397bb9a2b\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:23:02.069247 kubelet[2541]: I0805 22:23:02.069255 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:23:02.069502 kubelet[2541]: I0805 22:23:02.069392 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:23:02.069502 kubelet[2541]: I0805 22:23:02.069465 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:23:02.069577 kubelet[2541]: I0805 22:23:02.069515 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:23:02.069577 kubelet[2541]: I0805 22:23:02.069546 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb686d9581fc5af7d1cc8e14735ce3db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cb686d9581fc5af7d1cc8e14735ce3db\") " pod="kube-system/kube-scheduler-localhost"
Aug 5 22:23:02.069649 kubelet[2541]: I0805 22:23:02.069618 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/165949b7f95bc54fc237af9397bb9a2b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"165949b7f95bc54fc237af9397bb9a2b\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:23:02.069726 kubelet[2541]: I0805 22:23:02.069652 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/165949b7f95bc54fc237af9397bb9a2b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"165949b7f95bc54fc237af9397bb9a2b\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:23:02.069726 kubelet[2541]: I0805 22:23:02.069693 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/088f5b844ad7241e38f298babde6e061-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"088f5b844ad7241e38f298babde6e061\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:23:02.205431 kubelet[2541]: E0805 22:23:02.205384 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:02.206130 kubelet[2541]: E0805 22:23:02.206095 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:02.206211 kubelet[2541]: E0805 22:23:02.206103 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:02.754298 kubelet[2541]: I0805 22:23:02.754238 2541 apiserver.go:52] "Watching apiserver"
Aug 5 22:23:02.770758 kubelet[2541]: I0805 22:23:02.769066 2541 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 22:23:02.805294 kubelet[2541]: E0805 22:23:02.805244 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:02.806380 kubelet[2541]: E0805 22:23:02.806343 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:02.814636 kubelet[2541]: E0805 22:23:02.814396 2541 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 5 22:23:02.815122 kubelet[2541]: E0805 22:23:02.814975 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:02.841408 kubelet[2541]: I0805 22:23:02.840400 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.840330676 podStartE2EDuration="1.840330676s" podCreationTimestamp="2024-08-05 22:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:23:02.836737473 +0000 UTC m=+1.165475011" watchObservedRunningTime="2024-08-05 22:23:02.840330676 +0000 UTC m=+1.169068215"
Aug 5 22:23:02.843754 kubelet[2541]: I0805 22:23:02.843708 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.843635192 podStartE2EDuration="1.843635192s" podCreationTimestamp="2024-08-05 22:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:23:02.843082235 +0000 UTC m=+1.171819773" watchObservedRunningTime="2024-08-05 22:23:02.843635192 +0000 UTC m=+1.172372731"
Aug 5 22:23:02.852556 kubelet[2541]: I0805 22:23:02.852515 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8524550789999998 podStartE2EDuration="1.852455079s" podCreationTimestamp="2024-08-05 22:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:23:02.852383983 +0000 UTC m=+1.181121521" watchObservedRunningTime="2024-08-05 22:23:02.852455079 +0000 UTC m=+1.181192617"
Aug 5 22:23:03.806593 kubelet[2541]: E0805 22:23:03.806554 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:04.038901 update_engine[1430]: I0805 22:23:04.038825 1430 update_attempter.cc:509] Updating boot flags...
Aug 5 22:23:04.072402 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2614)
Aug 5 22:23:04.119503 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2613)
Aug 5 22:23:04.149402 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2613)
Aug 5 22:23:04.808470 kubelet[2541]: E0805 22:23:04.808428 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:06.044575 sudo[1623]: pam_unix(sudo:session): session closed for user root
Aug 5 22:23:06.047226 sshd[1620]: pam_unix(sshd:session): session closed for user core
Aug 5 22:23:06.052064 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:35778.service: Deactivated successfully.
Aug 5 22:23:06.054207 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 22:23:06.054426 systemd[1]: session-7.scope: Consumed 5.386s CPU time, 146.2M memory peak, 0B memory swap peak.
Aug 5 22:23:06.055023 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit.
Aug 5 22:23:06.056059 systemd-logind[1426]: Removed session 7.
Aug 5 22:23:07.771700 kubelet[2541]: E0805 22:23:07.768488 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:07.829903 kubelet[2541]: E0805 22:23:07.829662 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:11.537715 kubelet[2541]: E0805 22:23:11.536151 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:11.855999 kubelet[2541]: E0805 22:23:11.855837 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:13.450710 kubelet[2541]: E0805 22:23:13.450655 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:13.458253 kubelet[2541]: I0805 22:23:13.458215 2541 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 5 22:23:13.458677 containerd[1439]: time="2024-08-05T22:23:13.458629018Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 5 22:23:13.459224 kubelet[2541]: I0805 22:23:13.458960 2541 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 22:23:14.153242 kubelet[2541]: I0805 22:23:14.153165 2541 topology_manager.go:215] "Topology Admit Handler" podUID="6685fc7e-3f03-4c87-85a5-c721da825830" podNamespace="kube-system" podName="kube-proxy-9qpfx"
Aug 5 22:23:14.175889 systemd[1]: Created slice kubepods-besteffort-pod6685fc7e_3f03_4c87_85a5_c721da825830.slice - libcontainer container kubepods-besteffort-pod6685fc7e_3f03_4c87_85a5_c721da825830.slice.
Aug 5 22:23:14.181990 kubelet[2541]: I0805 22:23:14.180007 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmf8k\" (UniqueName: \"kubernetes.io/projected/6685fc7e-3f03-4c87-85a5-c721da825830-kube-api-access-xmf8k\") pod \"kube-proxy-9qpfx\" (UID: \"6685fc7e-3f03-4c87-85a5-c721da825830\") " pod="kube-system/kube-proxy-9qpfx"
Aug 5 22:23:14.181990 kubelet[2541]: I0805 22:23:14.180058 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6685fc7e-3f03-4c87-85a5-c721da825830-kube-proxy\") pod \"kube-proxy-9qpfx\" (UID: \"6685fc7e-3f03-4c87-85a5-c721da825830\") " pod="kube-system/kube-proxy-9qpfx"
Aug 5 22:23:14.181990 kubelet[2541]: I0805 22:23:14.180093 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6685fc7e-3f03-4c87-85a5-c721da825830-xtables-lock\") pod \"kube-proxy-9qpfx\" (UID: \"6685fc7e-3f03-4c87-85a5-c721da825830\") " pod="kube-system/kube-proxy-9qpfx"
Aug 5 22:23:14.181990 kubelet[2541]: I0805 22:23:14.180119 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6685fc7e-3f03-4c87-85a5-c721da825830-lib-modules\") pod \"kube-proxy-9qpfx\" (UID: \"6685fc7e-3f03-4c87-85a5-c721da825830\") " pod="kube-system/kube-proxy-9qpfx"
Aug 5 22:23:14.311489 kubelet[2541]: E0805 22:23:14.310738 2541 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Aug 5 22:23:14.311489 kubelet[2541]: E0805 22:23:14.310777 2541 projected.go:200] Error preparing data for projected volume kube-api-access-xmf8k for pod kube-system/kube-proxy-9qpfx: configmap "kube-root-ca.crt" not found
Aug 5 22:23:14.311489 kubelet[2541]: E0805 22:23:14.310847 2541 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6685fc7e-3f03-4c87-85a5-c721da825830-kube-api-access-xmf8k podName:6685fc7e-3f03-4c87-85a5-c721da825830 nodeName:}" failed. No retries permitted until 2024-08-05 22:23:14.810821547 +0000 UTC m=+13.139559085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xmf8k" (UniqueName: "kubernetes.io/projected/6685fc7e-3f03-4c87-85a5-c721da825830-kube-api-access-xmf8k") pod "kube-proxy-9qpfx" (UID: "6685fc7e-3f03-4c87-85a5-c721da825830") : configmap "kube-root-ca.crt" not found
Aug 5 22:23:14.684471 kubelet[2541]: I0805 22:23:14.680740 2541 topology_manager.go:215] "Topology Admit Handler" podUID="01889ebe-9865-4a5e-ac17-ec3e2b896218" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-z89vj"
Aug 5 22:23:14.694980 kubelet[2541]: I0805 22:23:14.690112 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01889ebe-9865-4a5e-ac17-ec3e2b896218-var-lib-calico\") pod \"tigera-operator-76c4974c85-z89vj\" (UID: \"01889ebe-9865-4a5e-ac17-ec3e2b896218\") " pod="tigera-operator/tigera-operator-76c4974c85-z89vj"
Aug 5 22:23:14.694980 kubelet[2541]: I0805 22:23:14.690222 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbk6g\" (UniqueName: \"kubernetes.io/projected/01889ebe-9865-4a5e-ac17-ec3e2b896218-kube-api-access-tbk6g\") pod \"tigera-operator-76c4974c85-z89vj\" (UID: \"01889ebe-9865-4a5e-ac17-ec3e2b896218\") " pod="tigera-operator/tigera-operator-76c4974c85-z89vj"
Aug 5 22:23:14.707153 systemd[1]: Created slice kubepods-besteffort-pod01889ebe_9865_4a5e_ac17_ec3e2b896218.slice - libcontainer container kubepods-besteffort-pod01889ebe_9865_4a5e_ac17_ec3e2b896218.slice.
Aug 5 22:23:15.016960 containerd[1439]: time="2024-08-05T22:23:15.016285290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-z89vj,Uid:01889ebe-9865-4a5e-ac17-ec3e2b896218,Namespace:tigera-operator,Attempt:0,}"
Aug 5 22:23:15.104094 kubelet[2541]: E0805 22:23:15.101081 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:15.105870 containerd[1439]: time="2024-08-05T22:23:15.105417962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qpfx,Uid:6685fc7e-3f03-4c87-85a5-c721da825830,Namespace:kube-system,Attempt:0,}"
Aug 5 22:23:15.111052 containerd[1439]: time="2024-08-05T22:23:15.108792231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:23:15.111052 containerd[1439]: time="2024-08-05T22:23:15.108894263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:23:15.111052 containerd[1439]: time="2024-08-05T22:23:15.108922796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:23:15.111052 containerd[1439]: time="2024-08-05T22:23:15.108941932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:23:15.197788 containerd[1439]: time="2024-08-05T22:23:15.197437666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:23:15.197788 containerd[1439]: time="2024-08-05T22:23:15.197528798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:23:15.197788 containerd[1439]: time="2024-08-05T22:23:15.197559475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:23:15.197788 containerd[1439]: time="2024-08-05T22:23:15.197578761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:23:15.226933 systemd[1]: Started cri-containerd-a48173089daa03a9165e99ce53067be4f6e443f88d7d3d92167d6333fac9e539.scope - libcontainer container a48173089daa03a9165e99ce53067be4f6e443f88d7d3d92167d6333fac9e539.
Aug 5 22:23:15.243134 systemd[1]: Started cri-containerd-7982df8a70eb8250ab35ca47e1b801789c8969681e17beca97f00bc7c851da90.scope - libcontainer container 7982df8a70eb8250ab35ca47e1b801789c8969681e17beca97f00bc7c851da90.
Aug 5 22:23:15.308084 containerd[1439]: time="2024-08-05T22:23:15.307477436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qpfx,Uid:6685fc7e-3f03-4c87-85a5-c721da825830,Namespace:kube-system,Attempt:0,} returns sandbox id \"7982df8a70eb8250ab35ca47e1b801789c8969681e17beca97f00bc7c851da90\""
Aug 5 22:23:15.308383 kubelet[2541]: E0805 22:23:15.308345 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:15.313063 containerd[1439]: time="2024-08-05T22:23:15.313003950Z" level=info msg="CreateContainer within sandbox \"7982df8a70eb8250ab35ca47e1b801789c8969681e17beca97f00bc7c851da90\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 22:23:15.345836 containerd[1439]: time="2024-08-05T22:23:15.345640001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-z89vj,Uid:01889ebe-9865-4a5e-ac17-ec3e2b896218,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a48173089daa03a9165e99ce53067be4f6e443f88d7d3d92167d6333fac9e539\""
Aug 5 22:23:15.355814 containerd[1439]: time="2024-08-05T22:23:15.355753027Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 22:23:15.371550 containerd[1439]: time="2024-08-05T22:23:15.371178164Z" level=info msg="CreateContainer within sandbox \"7982df8a70eb8250ab35ca47e1b801789c8969681e17beca97f00bc7c851da90\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e316229bcda069bc0f1c40307e4f35a20600cccdc6bcd2978a4b5467b4b1cbb9\""
Aug 5 22:23:15.372090 containerd[1439]: time="2024-08-05T22:23:15.372054877Z" level=info msg="StartContainer for \"e316229bcda069bc0f1c40307e4f35a20600cccdc6bcd2978a4b5467b4b1cbb9\""
Aug 5 22:23:15.466664 systemd[1]: Started cri-containerd-e316229bcda069bc0f1c40307e4f35a20600cccdc6bcd2978a4b5467b4b1cbb9.scope - libcontainer container e316229bcda069bc0f1c40307e4f35a20600cccdc6bcd2978a4b5467b4b1cbb9.
Aug 5 22:23:15.586130 containerd[1439]: time="2024-08-05T22:23:15.585968528Z" level=info msg="StartContainer for \"e316229bcda069bc0f1c40307e4f35a20600cccdc6bcd2978a4b5467b4b1cbb9\" returns successfully"
Aug 5 22:23:15.914401 kubelet[2541]: E0805 22:23:15.911207 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:17.060079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264924894.mount: Deactivated successfully.
Aug 5 22:23:19.208386 containerd[1439]: time="2024-08-05T22:23:19.208286261Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:23:19.274448 containerd[1439]: time="2024-08-05T22:23:19.274304460Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076052"
Aug 5 22:23:19.331298 containerd[1439]: time="2024-08-05T22:23:19.329898157Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:23:19.353933 containerd[1439]: time="2024-08-05T22:23:19.353844132Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:23:19.354960 containerd[1439]: time="2024-08-05T22:23:19.354881545Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.999066s"
Aug 5 22:23:19.354960 containerd[1439]: time="2024-08-05T22:23:19.354944233Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Aug 5 22:23:19.360334 containerd[1439]: time="2024-08-05T22:23:19.360274977Z" level=info msg="CreateContainer within sandbox \"a48173089daa03a9165e99ce53067be4f6e443f88d7d3d92167d6333fac9e539\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 22:23:19.440169 containerd[1439]: time="2024-08-05T22:23:19.440075180Z" level=info msg="CreateContainer within sandbox \"a48173089daa03a9165e99ce53067be4f6e443f88d7d3d92167d6333fac9e539\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7ab3e81bb615e827b6941a2ce2491640785270bbbeba9ade255035b971b87982\""
Aug 5 22:23:19.442689 containerd[1439]: time="2024-08-05T22:23:19.442621983Z" level=info msg="StartContainer for \"7ab3e81bb615e827b6941a2ce2491640785270bbbeba9ade255035b971b87982\""
Aug 5 22:23:19.542013 systemd[1]: Started cri-containerd-7ab3e81bb615e827b6941a2ce2491640785270bbbeba9ade255035b971b87982.scope - libcontainer container 7ab3e81bb615e827b6941a2ce2491640785270bbbeba9ade255035b971b87982.
Aug 5 22:23:19.603271 containerd[1439]: time="2024-08-05T22:23:19.603137360Z" level=info msg="StartContainer for \"7ab3e81bb615e827b6941a2ce2491640785270bbbeba9ade255035b971b87982\" returns successfully"
Aug 5 22:23:19.959800 kubelet[2541]: I0805 22:23:19.959562 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9qpfx" podStartSLOduration=5.9594956119999996 podStartE2EDuration="5.959495612s" podCreationTimestamp="2024-08-05 22:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:23:15.941157391 +0000 UTC m=+14.269894929" watchObservedRunningTime="2024-08-05 22:23:19.959495612 +0000 UTC m=+18.288233150"
Aug 5 22:23:21.846435 kubelet[2541]: I0805 22:23:21.845924 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-z89vj" podStartSLOduration=3.83896909 podStartE2EDuration="7.845815209s" podCreationTimestamp="2024-08-05 22:23:14 +0000 UTC" firstStartedPulling="2024-08-05 22:23:15.349148381 +0000 UTC m=+13.677885919" lastFinishedPulling="2024-08-05 22:23:19.3559945 +0000 UTC m=+17.684732038" observedRunningTime="2024-08-05 22:23:19.95978138 +0000 UTC m=+18.288518918" watchObservedRunningTime="2024-08-05 22:23:21.845815209 +0000 UTC m=+20.174552777"
Aug 5 22:23:23.441417 kubelet[2541]: I0805 22:23:23.433608 2541 topology_manager.go:215] "Topology Admit Handler" podUID="10c170b9-351f-4375-8c86-4c0b3f21ad5a" podNamespace="calico-system" podName="calico-typha-7fb684bff6-5jtlz"
Aug 5 22:23:23.476188 systemd[1]: Created slice kubepods-besteffort-pod10c170b9_351f_4375_8c86_4c0b3f21ad5a.slice - libcontainer container kubepods-besteffort-pod10c170b9_351f_4375_8c86_4c0b3f21ad5a.slice.
Aug 5 22:23:23.587392 kubelet[2541]: I0805 22:23:23.587306 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c170b9-351f-4375-8c86-4c0b3f21ad5a-tigera-ca-bundle\") pod \"calico-typha-7fb684bff6-5jtlz\" (UID: \"10c170b9-351f-4375-8c86-4c0b3f21ad5a\") " pod="calico-system/calico-typha-7fb684bff6-5jtlz"
Aug 5 22:23:23.587392 kubelet[2541]: I0805 22:23:23.587399 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5z8p\" (UniqueName: \"kubernetes.io/projected/10c170b9-351f-4375-8c86-4c0b3f21ad5a-kube-api-access-t5z8p\") pod \"calico-typha-7fb684bff6-5jtlz\" (UID: \"10c170b9-351f-4375-8c86-4c0b3f21ad5a\") " pod="calico-system/calico-typha-7fb684bff6-5jtlz"
Aug 5 22:23:23.587619 kubelet[2541]: I0805 22:23:23.587437 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/10c170b9-351f-4375-8c86-4c0b3f21ad5a-typha-certs\") pod \"calico-typha-7fb684bff6-5jtlz\" (UID: \"10c170b9-351f-4375-8c86-4c0b3f21ad5a\") " pod="calico-system/calico-typha-7fb684bff6-5jtlz"
Aug 5 22:23:23.658903 kubelet[2541]: I0805 22:23:23.658799 2541 topology_manager.go:215] "Topology Admit Handler" podUID="58dfb7c8-d7cf-4823-ad2e-c7aa63a51548" podNamespace="calico-system" podName="calico-node-hmnj4"
Aug 5 22:23:23.688289 systemd[1]: Created slice kubepods-besteffort-pod58dfb7c8_d7cf_4823_ad2e_c7aa63a51548.slice - libcontainer container kubepods-besteffort-pod58dfb7c8_d7cf_4823_ad2e_c7aa63a51548.slice.
Aug 5 22:23:23.780145 kubelet[2541]: E0805 22:23:23.780013 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:23.781530 containerd[1439]: time="2024-08-05T22:23:23.781494997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb684bff6-5jtlz,Uid:10c170b9-351f-4375-8c86-4c0b3f21ad5a,Namespace:calico-system,Attempt:0,}"
Aug 5 22:23:23.788252 kubelet[2541]: I0805 22:23:23.787862 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-xtables-lock\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.788252 kubelet[2541]: I0805 22:23:23.787918 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-var-lib-calico\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.788252 kubelet[2541]: I0805 22:23:23.787945 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmb4z\" (UniqueName: \"kubernetes.io/projected/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-kube-api-access-tmb4z\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.788252 kubelet[2541]: I0805 22:23:23.788030 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-policysync\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.788252 kubelet[2541]: I0805 22:23:23.788095 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-node-certs\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.789261 kubelet[2541]: I0805 22:23:23.788120 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-cni-log-dir\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.789261 kubelet[2541]: I0805 22:23:23.788173 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-lib-modules\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.789261 kubelet[2541]: I0805 22:23:23.788209 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-var-run-calico\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.789261 kubelet[2541]: I0805 22:23:23.788390 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-cni-net-dir\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.789261 kubelet[2541]: I0805 22:23:23.788420 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-tigera-ca-bundle\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.793697 kubelet[2541]: I0805 22:23:23.788560 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-cni-bin-dir\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.793697 kubelet[2541]: I0805 22:23:23.788803 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/58dfb7c8-d7cf-4823-ad2e-c7aa63a51548-flexvol-driver-host\") pod \"calico-node-hmnj4\" (UID: \"58dfb7c8-d7cf-4823-ad2e-c7aa63a51548\") " pod="calico-system/calico-node-hmnj4"
Aug 5 22:23:23.822421 kubelet[2541]: I0805 22:23:23.822318 2541 topology_manager.go:215] "Topology Admit Handler" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" podNamespace="calico-system" podName="csi-node-driver-2bspg"
Aug 5 22:23:23.822720 kubelet[2541]: E0805 22:23:23.822686 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e"
Aug 5 22:23:23.916916 containerd[1439]: time="2024-08-05T22:23:23.915576262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:23:23.916916 containerd[1439]: time="2024-08-05T22:23:23.915668695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:23:23.916916 containerd[1439]: time="2024-08-05T22:23:23.915693241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:23:23.916916 containerd[1439]: time="2024-08-05T22:23:23.915710635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:23:23.957159 kubelet[2541]: E0805 22:23:23.956466 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:23.957159 kubelet[2541]: W0805 22:23:23.956521 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:23.957159 kubelet[2541]: E0805 22:23:23.956560 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:23.992751 kubelet[2541]: E0805 22:23:23.992690 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:23.992751 kubelet[2541]: W0805 22:23:23.992720 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:23.992751 kubelet[2541]: E0805 22:23:23.992753 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:23.993044 kubelet[2541]: I0805 22:23:23.992794 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eeaa21b6-36ba-499f-950d-4a8d4e76928e-socket-dir\") pod \"csi-node-driver-2bspg\" (UID: \"eeaa21b6-36ba-499f-950d-4a8d4e76928e\") " pod="calico-system/csi-node-driver-2bspg"
Aug 5 22:23:23.995058 kubelet[2541]: E0805 22:23:23.994646 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:23.995058 kubelet[2541]: W0805 22:23:23.994665 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:23.995058 kubelet[2541]: E0805 22:23:23.994686 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:23.995058 kubelet[2541]: I0805 22:23:23.994720 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eeaa21b6-36ba-499f-950d-4a8d4e76928e-varrun\") pod \"csi-node-driver-2bspg\" (UID: \"eeaa21b6-36ba-499f-950d-4a8d4e76928e\") " pod="calico-system/csi-node-driver-2bspg"
Aug 5 22:23:23.995207 systemd[1]: Started cri-containerd-5a92bd7b2b1cb391151f4e6cfffe3a1d2b5c8be57868dc8a0cedb1f19ee7ff32.scope - libcontainer container 5a92bd7b2b1cb391151f4e6cfffe3a1d2b5c8be57868dc8a0cedb1f19ee7ff32.
Aug 5 22:23:23.997709 kubelet[2541]: E0805 22:23:23.997223 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:23.997709 kubelet[2541]: W0805 22:23:23.997241 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:23.997709 kubelet[2541]: E0805 22:23:23.997445 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:23.997709 kubelet[2541]: I0805 22:23:23.997490 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eeaa21b6-36ba-499f-950d-4a8d4e76928e-kubelet-dir\") pod \"csi-node-driver-2bspg\" (UID: \"eeaa21b6-36ba-499f-950d-4a8d4e76928e\") " pod="calico-system/csi-node-driver-2bspg" Aug 5 22:23:24.010012 kubelet[2541]: E0805 22:23:24.009862 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.010012 kubelet[2541]: W0805 22:23:24.009892 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.017240 kubelet[2541]: E0805 22:23:24.011171 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:24.017240 kubelet[2541]: E0805 22:23:24.013205 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.017240 kubelet[2541]: W0805 22:23:24.014416 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.017240 kubelet[2541]: E0805 22:23:24.014475 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:24.020886 kubelet[2541]: E0805 22:23:24.020023 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.020886 kubelet[2541]: W0805 22:23:24.020089 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.020886 kubelet[2541]: E0805 22:23:24.020135 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:24.020886 kubelet[2541]: I0805 22:23:24.020200 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eeaa21b6-36ba-499f-950d-4a8d4e76928e-registration-dir\") pod \"csi-node-driver-2bspg\" (UID: \"eeaa21b6-36ba-499f-950d-4a8d4e76928e\") " pod="calico-system/csi-node-driver-2bspg" Aug 5 22:23:24.021422 kubelet[2541]: E0805 22:23:24.021407 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.021500 kubelet[2541]: W0805 22:23:24.021486 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.021588 kubelet[2541]: E0805 22:23:24.021574 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:24.022555 kubelet[2541]: E0805 22:23:24.022538 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.022648 kubelet[2541]: W0805 22:23:24.022633 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.022738 kubelet[2541]: E0805 22:23:24.022723 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:24.023418 kubelet[2541]: E0805 22:23:24.023401 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.023505 kubelet[2541]: W0805 22:23:24.023488 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.023600 kubelet[2541]: E0805 22:23:24.023586 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:24.024747 kubelet[2541]: E0805 22:23:24.024728 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.024840 kubelet[2541]: W0805 22:23:24.024826 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.024916 kubelet[2541]: E0805 22:23:24.024901 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:24.025191 kubelet[2541]: E0805 22:23:24.025176 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:24.025967 kubelet[2541]: E0805 22:23:24.025950 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.026070 kubelet[2541]: W0805 22:23:24.026053 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.026152 kubelet[2541]: E0805 22:23:24.026139 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:24.026582 kubelet[2541]: E0805 22:23:24.026564 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.026658 kubelet[2541]: W0805 22:23:24.026644 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.026853 kubelet[2541]: E0805 22:23:24.026735 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:24.028381 containerd[1439]: time="2024-08-05T22:23:24.027761005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hmnj4,Uid:58dfb7c8-d7cf-4823-ad2e-c7aa63a51548,Namespace:calico-system,Attempt:0,}" Aug 5 22:23:24.030950 kubelet[2541]: E0805 22:23:24.029707 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.030950 kubelet[2541]: W0805 22:23:24.029744 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.030950 kubelet[2541]: E0805 22:23:24.029799 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:24.030950 kubelet[2541]: I0805 22:23:24.029849 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9dg\" (UniqueName: \"kubernetes.io/projected/eeaa21b6-36ba-499f-950d-4a8d4e76928e-kube-api-access-lx9dg\") pod \"csi-node-driver-2bspg\" (UID: \"eeaa21b6-36ba-499f-950d-4a8d4e76928e\") " pod="calico-system/csi-node-driver-2bspg" Aug 5 22:23:24.040123 kubelet[2541]: E0805 22:23:24.039577 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:24.040123 kubelet[2541]: W0805 22:23:24.039610 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:24.040123 kubelet[2541]: E0805 22:23:24.039643 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 5 22:23:24.040696 kubelet[2541]: E0805 22:23:24.040648 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.040696 kubelet[2541]: W0805 22:23:24.040671 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.040696 kubelet[2541]: E0805 22:23:24.040690 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.087234 containerd[1439]: time="2024-08-05T22:23:24.087166293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb684bff6-5jtlz,Uid:10c170b9-351f-4375-8c86-4c0b3f21ad5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a92bd7b2b1cb391151f4e6cfffe3a1d2b5c8be57868dc8a0cedb1f19ee7ff32\""
Aug 5 22:23:24.092336 kubelet[2541]: E0805 22:23:24.089106 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:24.131889 kubelet[2541]: E0805 22:23:24.131335 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.131889 kubelet[2541]: W0805 22:23:24.131413 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.131889 kubelet[2541]: E0805 22:23:24.131442 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.132544 kubelet[2541]: E0805 22:23:24.132529 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.132744 kubelet[2541]: W0805 22:23:24.132614 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.132744 kubelet[2541]: E0805 22:23:24.132652 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.133312 kubelet[2541]: E0805 22:23:24.133226 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.133312 kubelet[2541]: W0805 22:23:24.133245 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.133312 kubelet[2541]: E0805 22:23:24.133274 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:23:24.134637 kubelet[2541]: E0805 22:23:24.134608 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.134637 kubelet[2541]: W0805 22:23:24.134625 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.134974 kubelet[2541]: E0805 22:23:24.134761 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.135059 kubelet[2541]: E0805 22:23:24.135001 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.135059 kubelet[2541]: W0805 22:23:24.135014 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.135305 kubelet[2541]: E0805 22:23:24.135201 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.136773 kubelet[2541]: E0805 22:23:24.135837 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.136773 kubelet[2541]: W0805 22:23:24.136009 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.138235 kubelet[2541]: E0805 22:23:24.137855 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.138235 kubelet[2541]: E0805 22:23:24.138095 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.138235 kubelet[2541]: W0805 22:23:24.138109 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.138235 kubelet[2541]: E0805 22:23:24.138169 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:23:24.139998 kubelet[2541]: E0805 22:23:24.139955 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.139998 kubelet[2541]: W0805 22:23:24.139975 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.140257 kubelet[2541]: E0805 22:23:24.140141 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.141947 kubelet[2541]: E0805 22:23:24.141887 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.141947 kubelet[2541]: W0805 22:23:24.141905 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.142259 kubelet[2541]: E0805 22:23:24.142202 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.145250 kubelet[2541]: E0805 22:23:24.143659 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.145250 kubelet[2541]: W0805 22:23:24.143836 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.149643 kubelet[2541]: E0805 22:23:24.149593 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.150083 kubelet[2541]: E0805 22:23:24.149957 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.150083 kubelet[2541]: W0805 22:23:24.149980 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.150083 kubelet[2541]: E0805 22:23:24.150023 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:23:24.150879 kubelet[2541]: E0805 22:23:24.150818 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.150879 kubelet[2541]: W0805 22:23:24.150841 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.151119 kubelet[2541]: E0805 22:23:24.151079 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.151899 kubelet[2541]: E0805 22:23:24.151716 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.151899 kubelet[2541]: W0805 22:23:24.151729 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.151899 kubelet[2541]: E0805 22:23:24.151805 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.152548 kubelet[2541]: E0805 22:23:24.152209 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.152548 kubelet[2541]: W0805 22:23:24.152241 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.152548 kubelet[2541]: E0805 22:23:24.152346 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.153128 kubelet[2541]: E0805 22:23:24.152975 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.153128 kubelet[2541]: W0805 22:23:24.152992 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.153128 kubelet[2541]: E0805 22:23:24.153058 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:23:24.153563 kubelet[2541]: E0805 22:23:24.153504 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.153563 kubelet[2541]: W0805 22:23:24.153527 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.159438 kubelet[2541]: E0805 22:23:24.156993 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.163402 kubelet[2541]: E0805 22:23:24.162107 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.163402 kubelet[2541]: W0805 22:23:24.162142 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.163402 kubelet[2541]: E0805 22:23:24.162379 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.163716 kubelet[2541]: E0805 22:23:24.163478 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.163716 kubelet[2541]: W0805 22:23:24.163493 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.163871 kubelet[2541]: E0805 22:23:24.163774 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.169878 kubelet[2541]: E0805 22:23:24.167489 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.169878 kubelet[2541]: W0805 22:23:24.167520 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.169878 kubelet[2541]: E0805 22:23:24.167742 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:23:24.172425 kubelet[2541]: E0805 22:23:24.171499 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.172425 kubelet[2541]: W0805 22:23:24.171537 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.172425 kubelet[2541]: E0805 22:23:24.171808 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.174706 kubelet[2541]: E0805 22:23:24.174646 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.175012 kubelet[2541]: W0805 22:23:24.174862 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.176525 kubelet[2541]: E0805 22:23:24.175116 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.176525 kubelet[2541]: E0805 22:23:24.175788 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.176525 kubelet[2541]: W0805 22:23:24.175815 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.176525 kubelet[2541]: E0805 22:23:24.175881 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.176816 kubelet[2541]: E0805 22:23:24.176563 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.176816 kubelet[2541]: W0805 22:23:24.176577 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.176816 kubelet[2541]: E0805 22:23:24.176603 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:23:24.176970 kubelet[2541]: E0805 22:23:24.176932 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.176970 kubelet[2541]: W0805 22:23:24.176955 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.177109 kubelet[2541]: E0805 22:23:24.176983 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.231889 kubelet[2541]: E0805 22:23:24.231731 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.231889 kubelet[2541]: W0805 22:23:24.231768 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.231889 kubelet[2541]: E0805 22:23:24.231798 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.241147 kubelet[2541]: E0805 22:23:24.240978 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.241147 kubelet[2541]: W0805 22:23:24.241002 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.241147 kubelet[2541]: E0805 22:23:24.241030 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.290159 kubelet[2541]: E0805 22:23:24.288741 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:24.290159 kubelet[2541]: W0805 22:23:24.288767 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:24.290159 kubelet[2541]: E0805 22:23:24.288791 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:24.301046 containerd[1439]: time="2024-08-05T22:23:24.299296867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\""
Aug 5 22:23:24.654937 containerd[1439]: time="2024-08-05T22:23:24.652685429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:23:24.654937 containerd[1439]: time="2024-08-05T22:23:24.652756652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:23:24.654937 containerd[1439]: time="2024-08-05T22:23:24.652780517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:23:24.654937 containerd[1439]: time="2024-08-05T22:23:24.652805324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:23:24.696671 systemd[1]: Started cri-containerd-23354943e61f20c1616350fc18bec8f203835af38a936e0c7ee16ec6fb5c58be.scope - libcontainer container 23354943e61f20c1616350fc18bec8f203835af38a936e0c7ee16ec6fb5c58be.
Aug 5 22:23:24.808924 containerd[1439]: time="2024-08-05T22:23:24.808860924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hmnj4,Uid:58dfb7c8-d7cf-4823-ad2e-c7aa63a51548,Namespace:calico-system,Attempt:0,} returns sandbox id \"23354943e61f20c1616350fc18bec8f203835af38a936e0c7ee16ec6fb5c58be\""
Aug 5 22:23:24.811531 kubelet[2541]: E0805 22:23:24.810404 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:25.804321 kubelet[2541]: E0805 22:23:25.797215 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e"
Aug 5 22:23:27.798752 kubelet[2541]: E0805 22:23:27.798487 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e"
Aug 5 22:23:29.030612 containerd[1439]: time="2024-08-05T22:23:29.029058200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030"
Aug 5 22:23:29.030612 containerd[1439]: time="2024-08-05T22:23:29.029848124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:23:29.035329 containerd[1439]: time="2024-08-05T22:23:29.032370674Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:23:29.035329 containerd[1439]: time="2024-08-05T22:23:29.033156951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:23:29.035329 containerd[1439]: time="2024-08-05T22:23:29.035000315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.735646881s"
Aug 5 22:23:29.035329 containerd[1439]: time="2024-08-05T22:23:29.035036914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\""
Aug 5 22:23:29.039407 containerd[1439]: time="2024-08-05T22:23:29.039340821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\""
Aug 5 22:23:29.091598 containerd[1439]: time="2024-08-05T22:23:29.091503927Z" level=info msg="CreateContainer within sandbox
\"5a92bd7b2b1cb391151f4e6cfffe3a1d2b5c8be57868dc8a0cedb1f19ee7ff32\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 5 22:23:29.151132 containerd[1439]: time="2024-08-05T22:23:29.142623332Z" level=info msg="CreateContainer within sandbox \"5a92bd7b2b1cb391151f4e6cfffe3a1d2b5c8be57868dc8a0cedb1f19ee7ff32\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cfa987289e06e352d979217a07a68bc780906f69c02b981b7c528ed8e1f13809\""
Aug 5 22:23:29.151132 containerd[1439]: time="2024-08-05T22:23:29.143527030Z" level=info msg="StartContainer for \"cfa987289e06e352d979217a07a68bc780906f69c02b981b7c528ed8e1f13809\""
Aug 5 22:23:29.239727 systemd[1]: Started cri-containerd-cfa987289e06e352d979217a07a68bc780906f69c02b981b7c528ed8e1f13809.scope - libcontainer container cfa987289e06e352d979217a07a68bc780906f69c02b981b7c528ed8e1f13809.
Aug 5 22:23:29.563167 containerd[1439]: time="2024-08-05T22:23:29.560521453Z" level=info msg="StartContainer for \"cfa987289e06e352d979217a07a68bc780906f69c02b981b7c528ed8e1f13809\" returns successfully"
Aug 5 22:23:29.793892 kubelet[2541]: E0805 22:23:29.793838 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e"
Aug 5 22:23:30.068859 kubelet[2541]: E0805 22:23:30.068812 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:23:30.166375 kubelet[2541]: E0805 22:23:30.166312 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.166593 kubelet[2541]: W0805 22:23:30.166571 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.166687 kubelet[2541]: E0805 22:23:30.166670 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:30.177554 kubelet[2541]: E0805 22:23:30.177491 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.177554 kubelet[2541]: W0805 22:23:30.177526 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.177554 kubelet[2541]: E0805 22:23:30.177560 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:30.178018 kubelet[2541]: E0805 22:23:30.177972 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.178018 kubelet[2541]: W0805 22:23:30.177986 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.178018 kubelet[2541]: E0805 22:23:30.177998 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:23:30.178319 kubelet[2541]: E0805 22:23:30.178286 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.178319 kubelet[2541]: W0805 22:23:30.178300 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.178319 kubelet[2541]: E0805 22:23:30.178314 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:30.178718 kubelet[2541]: E0805 22:23:30.178665 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.178718 kubelet[2541]: W0805 22:23:30.178679 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.178718 kubelet[2541]: E0805 22:23:30.178694 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:30.178984 kubelet[2541]: E0805 22:23:30.178951 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.178984 kubelet[2541]: W0805 22:23:30.178965 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.178984 kubelet[2541]: E0805 22:23:30.178977 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:30.186462 kubelet[2541]: E0805 22:23:30.185293 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.186462 kubelet[2541]: W0805 22:23:30.185327 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.186462 kubelet[2541]: E0805 22:23:30.185368 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Aug 5 22:23:30.186841 kubelet[2541]: E0805 22:23:30.186789 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.186841 kubelet[2541]: W0805 22:23:30.186808 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.186841 kubelet[2541]: E0805 22:23:30.186825 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:30.188434 kubelet[2541]: E0805 22:23:30.188399 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.188434 kubelet[2541]: W0805 22:23:30.188414 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.188434 kubelet[2541]: E0805 22:23:30.188429 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:30.188969 kubelet[2541]: E0805 22:23:30.188938 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.188969 kubelet[2541]: W0805 22:23:30.188957 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.188969 kubelet[2541]: E0805 22:23:30.188971 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:23:30.189274 kubelet[2541]: E0805 22:23:30.189245 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:23:30.189274 kubelet[2541]: W0805 22:23:30.189259 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:23:30.189274 kubelet[2541]: E0805 22:23:30.189271 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 5 22:23:30.189552 kubelet[2541]: E0805 22:23:30.189524 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.189552 kubelet[2541]: W0805 22:23:30.189538 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.189552 kubelet[2541]: E0805 22:23:30.189550 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.190561 kubelet[2541]: E0805 22:23:30.190525 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.190561 kubelet[2541]: W0805 22:23:30.190543 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.190561 kubelet[2541]: E0805 22:23:30.190557 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.190930 kubelet[2541]: E0805 22:23:30.190896 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.190930 kubelet[2541]: W0805 22:23:30.190911 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.190930 kubelet[2541]: E0805 22:23:30.190924 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.191307 kubelet[2541]: E0805 22:23:30.191269 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.191307 kubelet[2541]: W0805 22:23:30.191287 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.191307 kubelet[2541]: E0805 22:23:30.191304 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.198854 kubelet[2541]: E0805 22:23:30.192939 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.198854 kubelet[2541]: W0805 22:23:30.192964 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.198854 kubelet[2541]: E0805 22:23:30.192991 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.198854 kubelet[2541]: E0805 22:23:30.193460 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.198854 kubelet[2541]: W0805 22:23:30.193474 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.198854 kubelet[2541]: E0805 22:23:30.193495 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.198854 kubelet[2541]: E0805 22:23:30.194070 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.198854 kubelet[2541]: W0805 22:23:30.194083 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.198854 kubelet[2541]: E0805 22:23:30.194120 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.198854 kubelet[2541]: E0805 22:23:30.194669 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.202689 kubelet[2541]: W0805 22:23:30.194685 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.202689 kubelet[2541]: E0805 22:23:30.194711 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.202689 kubelet[2541]: E0805 22:23:30.194983 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.202689 kubelet[2541]: W0805 22:23:30.194996 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.202689 kubelet[2541]: E0805 22:23:30.195047 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.202689 kubelet[2541]: E0805 22:23:30.195336 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.202689 kubelet[2541]: W0805 22:23:30.195346 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.202689 kubelet[2541]: E0805 22:23:30.196022 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.202689 kubelet[2541]: E0805 22:23:30.196231 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.202689 kubelet[2541]: W0805 22:23:30.196242 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203043 kubelet[2541]: E0805 22:23:30.196305 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.203043 kubelet[2541]: E0805 22:23:30.196526 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203043 kubelet[2541]: W0805 22:23:30.196539 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203043 kubelet[2541]: E0805 22:23:30.196640 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.203043 kubelet[2541]: E0805 22:23:30.196882 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203043 kubelet[2541]: W0805 22:23:30.196894 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203043 kubelet[2541]: E0805 22:23:30.196918 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.203043 kubelet[2541]: E0805 22:23:30.197550 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203043 kubelet[2541]: W0805 22:23:30.197562 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203043 kubelet[2541]: E0805 22:23:30.197598 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.203408 kubelet[2541]: E0805 22:23:30.197925 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203408 kubelet[2541]: W0805 22:23:30.197935 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203408 kubelet[2541]: E0805 22:23:30.197988 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.203408 kubelet[2541]: E0805 22:23:30.198620 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203408 kubelet[2541]: W0805 22:23:30.198638 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203408 kubelet[2541]: E0805 22:23:30.198698 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.203408 kubelet[2541]: E0805 22:23:30.199390 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203408 kubelet[2541]: W0805 22:23:30.199402 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203408 kubelet[2541]: E0805 22:23:30.199429 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.203408 kubelet[2541]: E0805 22:23:30.200792 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203806 kubelet[2541]: W0805 22:23:30.200806 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203806 kubelet[2541]: E0805 22:23:30.200828 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.203806 kubelet[2541]: E0805 22:23:30.201884 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203806 kubelet[2541]: W0805 22:23:30.201902 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203806 kubelet[2541]: E0805 22:23:30.201934 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.203806 kubelet[2541]: E0805 22:23:30.202680 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203806 kubelet[2541]: W0805 22:23:30.202691 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203806 kubelet[2541]: E0805 22:23:30.203082 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.203806 kubelet[2541]: W0805 22:23:30.203093 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.203806 kubelet[2541]: E0805 22:23:30.203106 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:30.204158 kubelet[2541]: E0805 22:23:30.203153 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:30.209653 kubelet[2541]: E0805 22:23:30.204556 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:30.209653 kubelet[2541]: W0805 22:23:30.204572 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:30.209653 kubelet[2541]: E0805 22:23:30.204585 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.080179 kubelet[2541]: I0805 22:23:31.076960 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:23:31.080179 kubelet[2541]: E0805 22:23:31.077749 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:31.104596 kubelet[2541]: E0805 22:23:31.104149 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.104596 kubelet[2541]: W0805 22:23:31.104213 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.104596 kubelet[2541]: E0805 22:23:31.104275 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.108952 kubelet[2541]: E0805 22:23:31.108662 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.108952 kubelet[2541]: W0805 22:23:31.108682 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.108952 kubelet[2541]: E0805 22:23:31.108709 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.109475 kubelet[2541]: E0805 22:23:31.109078 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.109475 kubelet[2541]: W0805 22:23:31.109097 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.109475 kubelet[2541]: E0805 22:23:31.109112 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.114456 kubelet[2541]: E0805 22:23:31.112783 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.114456 kubelet[2541]: W0805 22:23:31.112802 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.114456 kubelet[2541]: E0805 22:23:31.112818 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.114456 kubelet[2541]: E0805 22:23:31.113224 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.114456 kubelet[2541]: W0805 22:23:31.113232 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.114456 kubelet[2541]: E0805 22:23:31.113246 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.114456 kubelet[2541]: E0805 22:23:31.113509 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.114456 kubelet[2541]: W0805 22:23:31.113519 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.114456 kubelet[2541]: E0805 22:23:31.113533 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.119723 kubelet[2541]: E0805 22:23:31.117128 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.119723 kubelet[2541]: W0805 22:23:31.117146 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.119723 kubelet[2541]: E0805 22:23:31.117165 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.125397 kubelet[2541]: E0805 22:23:31.120815 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.125397 kubelet[2541]: W0805 22:23:31.124263 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.125397 kubelet[2541]: E0805 22:23:31.125186 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.129141 kubelet[2541]: E0805 22:23:31.126922 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.129141 kubelet[2541]: W0805 22:23:31.126949 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.129141 kubelet[2541]: E0805 22:23:31.126983 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.129141 kubelet[2541]: E0805 22:23:31.127377 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.129141 kubelet[2541]: W0805 22:23:31.127388 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.129141 kubelet[2541]: E0805 22:23:31.127406 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.129141 kubelet[2541]: E0805 22:23:31.127653 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.129141 kubelet[2541]: W0805 22:23:31.127662 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.129141 kubelet[2541]: E0805 22:23:31.127680 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.129141 kubelet[2541]: E0805 22:23:31.127931 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.129540 kubelet[2541]: W0805 22:23:31.127939 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.129540 kubelet[2541]: E0805 22:23:31.127955 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.129540 kubelet[2541]: E0805 22:23:31.128283 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.129540 kubelet[2541]: W0805 22:23:31.128292 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.129540 kubelet[2541]: E0805 22:23:31.128305 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.129540 kubelet[2541]: E0805 22:23:31.128573 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.129540 kubelet[2541]: W0805 22:23:31.128583 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.129540 kubelet[2541]: E0805 22:23:31.128596 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.129540 kubelet[2541]: E0805 22:23:31.128845 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.129540 kubelet[2541]: W0805 22:23:31.128854 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.129846 kubelet[2541]: E0805 22:23:31.128871 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.130244 kubelet[2541]: E0805 22:23:31.130214 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.130244 kubelet[2541]: W0805 22:23:31.130231 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.130244 kubelet[2541]: E0805 22:23:31.130245 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.130636 kubelet[2541]: E0805 22:23:31.130608 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.130636 kubelet[2541]: W0805 22:23:31.130629 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.130721 kubelet[2541]: E0805 22:23:31.130644 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.131022 kubelet[2541]: E0805 22:23:31.130988 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.131022 kubelet[2541]: W0805 22:23:31.131016 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.131126 kubelet[2541]: E0805 22:23:31.131051 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.131603 kubelet[2541]: E0805 22:23:31.131575 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.131603 kubelet[2541]: W0805 22:23:31.131595 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.131697 kubelet[2541]: E0805 22:23:31.131618 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.134502 kubelet[2541]: E0805 22:23:31.134467 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.134502 kubelet[2541]: W0805 22:23:31.134492 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.134682 kubelet[2541]: E0805 22:23:31.134660 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.134968 kubelet[2541]: E0805 22:23:31.134833 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.134968 kubelet[2541]: W0805 22:23:31.134848 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.134968 kubelet[2541]: E0805 22:23:31.134905 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.135287 kubelet[2541]: E0805 22:23:31.135174 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.135287 kubelet[2541]: W0805 22:23:31.135188 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.135287 kubelet[2541]: E0805 22:23:31.135264 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.135503 kubelet[2541]: E0805 22:23:31.135493 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.135544 kubelet[2541]: W0805 22:23:31.135504 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.135596 kubelet[2541]: E0805 22:23:31.135578 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.136291 kubelet[2541]: E0805 22:23:31.136065 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.136291 kubelet[2541]: W0805 22:23:31.136136 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.136291 kubelet[2541]: E0805 22:23:31.136171 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.136839 kubelet[2541]: E0805 22:23:31.136725 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.136839 kubelet[2541]: W0805 22:23:31.136739 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.136839 kubelet[2541]: E0805 22:23:31.136777 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.139141 kubelet[2541]: E0805 22:23:31.137239 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.139141 kubelet[2541]: W0805 22:23:31.137251 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.139141 kubelet[2541]: E0805 22:23:31.137316 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.139141 kubelet[2541]: E0805 22:23:31.137609 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.139141 kubelet[2541]: W0805 22:23:31.137620 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.139141 kubelet[2541]: E0805 22:23:31.137648 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.139141 kubelet[2541]: E0805 22:23:31.138001 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.139141 kubelet[2541]: W0805 22:23:31.138015 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.139141 kubelet[2541]: E0805 22:23:31.138045 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.139141 kubelet[2541]: E0805 22:23:31.138509 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.139590 kubelet[2541]: W0805 22:23:31.138604 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.139590 kubelet[2541]: E0805 22:23:31.138701 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.139952 kubelet[2541]: E0805 22:23:31.139916 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.139952 kubelet[2541]: W0805 22:23:31.139937 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.140169 kubelet[2541]: E0805 22:23:31.140062 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.142341 kubelet[2541]: E0805 22:23:31.141246 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.142341 kubelet[2541]: W0805 22:23:31.141261 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.142341 kubelet[2541]: E0805 22:23:31.141282 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.143864 kubelet[2541]: E0805 22:23:31.143810 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.143864 kubelet[2541]: W0805 22:23:31.143834 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.143864 kubelet[2541]: E0805 22:23:31.143854 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:23:31.145623 kubelet[2541]: E0805 22:23:31.144608 2541 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:23:31.145623 kubelet[2541]: W0805 22:23:31.145066 2541 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:23:31.145972 kubelet[2541]: E0805 22:23:31.145898 2541 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:23:31.234398 containerd[1439]: time="2024-08-05T22:23:31.232299296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:31.234398 containerd[1439]: time="2024-08-05T22:23:31.234207661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 5 22:23:31.243234 containerd[1439]: time="2024-08-05T22:23:31.240100801Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:31.248295 containerd[1439]: time="2024-08-05T22:23:31.248204965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:31.248841 containerd[1439]: time="2024-08-05T22:23:31.248776970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 2.209368512s" Aug 5 22:23:31.248841 containerd[1439]: time="2024-08-05T22:23:31.248821013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 5 22:23:31.255433 containerd[1439]: time="2024-08-05T22:23:31.251239226Z" level=info msg="CreateContainer within sandbox \"23354943e61f20c1616350fc18bec8f203835af38a936e0c7ee16ec6fb5c58be\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:23:31.295335 containerd[1439]: time="2024-08-05T22:23:31.295243932Z" level=info msg="CreateContainer within sandbox \"23354943e61f20c1616350fc18bec8f203835af38a936e0c7ee16ec6fb5c58be\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"edf541ce33e694bef99edd1d161825197f5e276766d7b16eb4754f279d26903e\"" Aug 5 22:23:31.304184 containerd[1439]: time="2024-08-05T22:23:31.296095582Z" level=info msg="StartContainer for \"edf541ce33e694bef99edd1d161825197f5e276766d7b16eb4754f279d26903e\"" Aug 5 22:23:31.375664 systemd[1]: Started cri-containerd-edf541ce33e694bef99edd1d161825197f5e276766d7b16eb4754f279d26903e.scope - libcontainer container edf541ce33e694bef99edd1d161825197f5e276766d7b16eb4754f279d26903e. Aug 5 22:23:31.541875 systemd[1]: cri-containerd-edf541ce33e694bef99edd1d161825197f5e276766d7b16eb4754f279d26903e.scope: Deactivated successfully. Aug 5 22:23:31.553942 containerd[1439]: time="2024-08-05T22:23:31.553903445Z" level=info msg="StartContainer for \"edf541ce33e694bef99edd1d161825197f5e276766d7b16eb4754f279d26903e\" returns successfully" Aug 5 22:23:31.634718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edf541ce33e694bef99edd1d161825197f5e276766d7b16eb4754f279d26903e-rootfs.mount: Deactivated successfully. 
Aug 5 22:23:31.764172 containerd[1439]: time="2024-08-05T22:23:31.761399845Z" level=info msg="shim disconnected" id=edf541ce33e694bef99edd1d161825197f5e276766d7b16eb4754f279d26903e namespace=k8s.io Aug 5 22:23:31.764172 containerd[1439]: time="2024-08-05T22:23:31.761466470Z" level=warning msg="cleaning up after shim disconnected" id=edf541ce33e694bef99edd1d161825197f5e276766d7b16eb4754f279d26903e namespace=k8s.io Aug 5 22:23:31.764172 containerd[1439]: time="2024-08-05T22:23:31.761476228Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:23:31.798483 kubelet[2541]: E0805 22:23:31.793630 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:31.798695 containerd[1439]: time="2024-08-05T22:23:31.797242227Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:23:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:23:32.113912 kubelet[2541]: E0805 22:23:32.113302 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:32.122771 containerd[1439]: time="2024-08-05T22:23:32.122673011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 22:23:32.190189 kubelet[2541]: I0805 22:23:32.188943 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7fb684bff6-5jtlz" podStartSLOduration=4.243420175 podStartE2EDuration="9.188888379s" podCreationTimestamp="2024-08-05 22:23:23 +0000 UTC" firstStartedPulling="2024-08-05 22:23:24.090020159 +0000 UTC 
m=+22.418757707" lastFinishedPulling="2024-08-05 22:23:29.035488373 +0000 UTC m=+27.364225911" observedRunningTime="2024-08-05 22:23:30.111491165 +0000 UTC m=+28.440228723" watchObservedRunningTime="2024-08-05 22:23:32.188888379 +0000 UTC m=+30.517625927" Aug 5 22:23:33.794756 kubelet[2541]: E0805 22:23:33.794242 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:35.795211 kubelet[2541]: E0805 22:23:35.793641 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:37.796629 kubelet[2541]: E0805 22:23:37.794331 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:39.794913 kubelet[2541]: E0805 22:23:39.793995 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:41.648772 kubelet[2541]: I0805 22:23:41.645160 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:23:41.648772 kubelet[2541]: E0805 22:23:41.645882 2541 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:41.797385 kubelet[2541]: E0805 22:23:41.794128 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:42.171307 kubelet[2541]: E0805 22:23:42.171225 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:42.310541 kernel: hrtimer: interrupt took 8430895 ns Aug 5 22:23:43.799634 kubelet[2541]: E0805 22:23:43.798998 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:43.809189 containerd[1439]: time="2024-08-05T22:23:43.807138610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:43.827020 containerd[1439]: time="2024-08-05T22:23:43.826834355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Aug 5 22:23:43.847494 containerd[1439]: time="2024-08-05T22:23:43.847345192Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:43.862664 containerd[1439]: time="2024-08-05T22:23:43.862525873Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:43.876568 containerd[1439]: time="2024-08-05T22:23:43.870552708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 11.747828341s" Aug 5 22:23:43.876568 containerd[1439]: time="2024-08-05T22:23:43.870618201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 5 22:23:43.885825 containerd[1439]: time="2024-08-05T22:23:43.882828587Z" level=info msg="CreateContainer within sandbox \"23354943e61f20c1616350fc18bec8f203835af38a936e0c7ee16ec6fb5c58be\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:23:44.022239 containerd[1439]: time="2024-08-05T22:23:44.019957606Z" level=info msg="CreateContainer within sandbox \"23354943e61f20c1616350fc18bec8f203835af38a936e0c7ee16ec6fb5c58be\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bb8529a6ae4e8fb19445db59edc50eec7c2aeff012ac44bbd52e055987cd25c7\"" Aug 5 22:23:44.026024 containerd[1439]: time="2024-08-05T22:23:44.022704381Z" level=info msg="StartContainer for \"bb8529a6ae4e8fb19445db59edc50eec7c2aeff012ac44bbd52e055987cd25c7\"" Aug 5 22:23:44.096842 systemd[1]: Started cri-containerd-bb8529a6ae4e8fb19445db59edc50eec7c2aeff012ac44bbd52e055987cd25c7.scope - libcontainer container bb8529a6ae4e8fb19445db59edc50eec7c2aeff012ac44bbd52e055987cd25c7. 
Aug 5 22:23:44.501011 containerd[1439]: time="2024-08-05T22:23:44.500123155Z" level=info msg="StartContainer for \"bb8529a6ae4e8fb19445db59edc50eec7c2aeff012ac44bbd52e055987cd25c7\" returns successfully" Aug 5 22:23:45.202532 kubelet[2541]: E0805 22:23:45.201894 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:45.800138 kubelet[2541]: E0805 22:23:45.800054 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:46.196914 kubelet[2541]: E0805 22:23:46.194934 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:46.661226 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:60180.service - OpenSSH per-connection server daemon (10.0.0.1:60180). Aug 5 22:23:47.706651 sshd[3299]: Accepted publickey for core from 10.0.0.1 port 60180 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:23:47.709403 sshd[3299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:47.720945 systemd-logind[1426]: New session 8 of user core. Aug 5 22:23:47.734744 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 5 22:23:47.795796 kubelet[2541]: E0805 22:23:47.793753 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:48.118680 sshd[3299]: pam_unix(sshd:session): session closed for user core Aug 5 22:23:48.124466 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:60180.service: Deactivated successfully. Aug 5 22:23:48.128027 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:23:48.130760 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:23:48.136539 systemd-logind[1426]: Removed session 8. Aug 5 22:23:48.972007 systemd[1]: cri-containerd-bb8529a6ae4e8fb19445db59edc50eec7c2aeff012ac44bbd52e055987cd25c7.scope: Deactivated successfully. Aug 5 22:23:48.993284 kubelet[2541]: I0805 22:23:48.993223 2541 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Aug 5 22:23:49.025158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb8529a6ae4e8fb19445db59edc50eec7c2aeff012ac44bbd52e055987cd25c7-rootfs.mount: Deactivated successfully. 
Aug 5 22:23:49.406512 kubelet[2541]: I0805 22:23:49.402939 2541 topology_manager.go:215] "Topology Admit Handler" podUID="7d175971-007a-4ccf-8d7c-0d4fdd69885a" podNamespace="kube-system" podName="coredns-76f75df574-vbwsm" Aug 5 22:23:49.406512 kubelet[2541]: I0805 22:23:49.404707 2541 topology_manager.go:215] "Topology Admit Handler" podUID="39d442b1-b745-42d5-aa8f-305f09d7421b" podNamespace="kube-system" podName="coredns-76f75df574-ndrgp" Aug 5 22:23:49.423890 kubelet[2541]: I0805 22:23:49.423639 2541 topology_manager.go:215] "Topology Admit Handler" podUID="fac418ef-ee88-4feb-ba03-6949e13d198f" podNamespace="calico-system" podName="calico-kube-controllers-69544c9d74-xldcd" Aug 5 22:23:49.441575 systemd[1]: Created slice kubepods-burstable-pod7d175971_007a_4ccf_8d7c_0d4fdd69885a.slice - libcontainer container kubepods-burstable-pod7d175971_007a_4ccf_8d7c_0d4fdd69885a.slice. Aug 5 22:23:49.465991 systemd[1]: Created slice kubepods-besteffort-podfac418ef_ee88_4feb_ba03_6949e13d198f.slice - libcontainer container kubepods-besteffort-podfac418ef_ee88_4feb_ba03_6949e13d198f.slice. Aug 5 22:23:49.489851 systemd[1]: Created slice kubepods-burstable-pod39d442b1_b745_42d5_aa8f_305f09d7421b.slice - libcontainer container kubepods-burstable-pod39d442b1_b745_42d5_aa8f_305f09d7421b.slice. 
Aug 5 22:23:49.552211 kubelet[2541]: I0805 22:23:49.551939 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d175971-007a-4ccf-8d7c-0d4fdd69885a-config-volume\") pod \"coredns-76f75df574-vbwsm\" (UID: \"7d175971-007a-4ccf-8d7c-0d4fdd69885a\") " pod="kube-system/coredns-76f75df574-vbwsm" Aug 5 22:23:49.552211 kubelet[2541]: I0805 22:23:49.552015 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7klh\" (UniqueName: \"kubernetes.io/projected/7d175971-007a-4ccf-8d7c-0d4fdd69885a-kube-api-access-n7klh\") pod \"coredns-76f75df574-vbwsm\" (UID: \"7d175971-007a-4ccf-8d7c-0d4fdd69885a\") " pod="kube-system/coredns-76f75df574-vbwsm" Aug 5 22:23:49.552211 kubelet[2541]: I0805 22:23:49.552050 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fac418ef-ee88-4feb-ba03-6949e13d198f-tigera-ca-bundle\") pod \"calico-kube-controllers-69544c9d74-xldcd\" (UID: \"fac418ef-ee88-4feb-ba03-6949e13d198f\") " pod="calico-system/calico-kube-controllers-69544c9d74-xldcd" Aug 5 22:23:49.552211 kubelet[2541]: I0805 22:23:49.552079 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39d442b1-b745-42d5-aa8f-305f09d7421b-config-volume\") pod \"coredns-76f75df574-ndrgp\" (UID: \"39d442b1-b745-42d5-aa8f-305f09d7421b\") " pod="kube-system/coredns-76f75df574-ndrgp" Aug 5 22:23:49.552211 kubelet[2541]: I0805 22:23:49.552117 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9778b\" (UniqueName: \"kubernetes.io/projected/fac418ef-ee88-4feb-ba03-6949e13d198f-kube-api-access-9778b\") pod \"calico-kube-controllers-69544c9d74-xldcd\" (UID: 
\"fac418ef-ee88-4feb-ba03-6949e13d198f\") " pod="calico-system/calico-kube-controllers-69544c9d74-xldcd" Aug 5 22:23:49.552680 kubelet[2541]: I0805 22:23:49.552145 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2tm\" (UniqueName: \"kubernetes.io/projected/39d442b1-b745-42d5-aa8f-305f09d7421b-kube-api-access-nb2tm\") pod \"coredns-76f75df574-ndrgp\" (UID: \"39d442b1-b745-42d5-aa8f-305f09d7421b\") " pod="kube-system/coredns-76f75df574-ndrgp" Aug 5 22:23:49.810523 systemd[1]: Created slice kubepods-besteffort-podeeaa21b6_36ba_499f_950d_4a8d4e76928e.slice - libcontainer container kubepods-besteffort-podeeaa21b6_36ba_499f_950d_4a8d4e76928e.slice. Aug 5 22:23:49.820046 containerd[1439]: time="2024-08-05T22:23:49.819940873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bspg,Uid:eeaa21b6-36ba-499f-950d-4a8d4e76928e,Namespace:calico-system,Attempt:0,}" Aug 5 22:23:50.054506 kubelet[2541]: E0805 22:23:50.052279 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:50.055510 containerd[1439]: time="2024-08-05T22:23:50.055294640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vbwsm,Uid:7d175971-007a-4ccf-8d7c-0d4fdd69885a,Namespace:kube-system,Attempt:0,}" Aug 5 22:23:50.081018 containerd[1439]: time="2024-08-05T22:23:50.080673979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69544c9d74-xldcd,Uid:fac418ef-ee88-4feb-ba03-6949e13d198f,Namespace:calico-system,Attempt:0,}" Aug 5 22:23:50.095206 kubelet[2541]: E0805 22:23:50.094146 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:50.095393 containerd[1439]: 
time="2024-08-05T22:23:50.094950195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ndrgp,Uid:39d442b1-b745-42d5-aa8f-305f09d7421b,Namespace:kube-system,Attempt:0,}" Aug 5 22:23:50.519862 containerd[1439]: time="2024-08-05T22:23:50.519670219Z" level=info msg="shim disconnected" id=bb8529a6ae4e8fb19445db59edc50eec7c2aeff012ac44bbd52e055987cd25c7 namespace=k8s.io Aug 5 22:23:50.519862 containerd[1439]: time="2024-08-05T22:23:50.519746913Z" level=warning msg="cleaning up after shim disconnected" id=bb8529a6ae4e8fb19445db59edc50eec7c2aeff012ac44bbd52e055987cd25c7 namespace=k8s.io Aug 5 22:23:50.519862 containerd[1439]: time="2024-08-05T22:23:50.519760949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:23:50.588771 containerd[1439]: time="2024-08-05T22:23:50.588676065Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:23:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:23:51.316960 kubelet[2541]: E0805 22:23:51.315166 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:23:51.329311 containerd[1439]: time="2024-08-05T22:23:51.325861309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:23:52.212887 containerd[1439]: time="2024-08-05T22:23:52.212172723Z" level=error msg="Failed to destroy network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.240284 containerd[1439]: time="2024-08-05T22:23:52.236168634Z" level=error msg="encountered an error cleaning up failed sandbox 
\"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.240284 containerd[1439]: time="2024-08-05T22:23:52.236309508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69544c9d74-xldcd,Uid:fac418ef-ee88-4feb-ba03-6949e13d198f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.240585 kubelet[2541]: E0805 22:23:52.236696 2541 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.240585 kubelet[2541]: E0805 22:23:52.236775 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69544c9d74-xldcd" Aug 5 22:23:52.240585 kubelet[2541]: E0805 22:23:52.236804 2541 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69544c9d74-xldcd" Aug 5 22:23:52.240730 kubelet[2541]: E0805 22:23:52.236887 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69544c9d74-xldcd_calico-system(fac418ef-ee88-4feb-ba03-6949e13d198f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69544c9d74-xldcd_calico-system(fac418ef-ee88-4feb-ba03-6949e13d198f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69544c9d74-xldcd" podUID="fac418ef-ee88-4feb-ba03-6949e13d198f" Aug 5 22:23:52.261222 containerd[1439]: time="2024-08-05T22:23:52.258926252Z" level=error msg="Failed to destroy network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.274597 containerd[1439]: time="2024-08-05T22:23:52.272198253Z" level=error msg="encountered an error cleaning up failed sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 5 22:23:52.274597 containerd[1439]: time="2024-08-05T22:23:52.272306877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bspg,Uid:eeaa21b6-36ba-499f-950d-4a8d4e76928e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.289813 kubelet[2541]: E0805 22:23:52.284581 2541 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.289813 kubelet[2541]: E0805 22:23:52.284656 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bspg" Aug 5 22:23:52.289813 kubelet[2541]: E0805 22:23:52.284704 2541 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2bspg" Aug 5 
22:23:52.290088 kubelet[2541]: E0805 22:23:52.284770 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2bspg_calico-system(eeaa21b6-36ba-499f-950d-4a8d4e76928e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2bspg_calico-system(eeaa21b6-36ba-499f-950d-4a8d4e76928e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:52.327127 kubelet[2541]: I0805 22:23:52.326885 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:23:52.345025 containerd[1439]: time="2024-08-05T22:23:52.328519141Z" level=error msg="Failed to destroy network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.345025 containerd[1439]: time="2024-08-05T22:23:52.335125537Z" level=info msg="StopPodSandbox for \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\"" Aug 5 22:23:52.346715 containerd[1439]: time="2024-08-05T22:23:52.346677690Z" level=info msg="Ensure that sandbox 99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae in task-service has been cleanup successfully" Aug 5 22:23:52.350613 containerd[1439]: time="2024-08-05T22:23:52.349306683Z" level=error msg="encountered an error cleaning up failed sandbox 
\"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.350613 containerd[1439]: time="2024-08-05T22:23:52.349404296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ndrgp,Uid:39d442b1-b745-42d5-aa8f-305f09d7421b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.351187 kubelet[2541]: E0805 22:23:52.349628 2541 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.351187 kubelet[2541]: E0805 22:23:52.349682 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ndrgp" Aug 5 22:23:52.351187 kubelet[2541]: E0805 22:23:52.349710 2541 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ndrgp" Aug 5 22:23:52.352175 kubelet[2541]: E0805 22:23:52.349779 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ndrgp_kube-system(39d442b1-b745-42d5-aa8f-305f09d7421b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ndrgp_kube-system(39d442b1-b745-42d5-aa8f-305f09d7421b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ndrgp" podUID="39d442b1-b745-42d5-aa8f-305f09d7421b" Aug 5 22:23:52.355565 kubelet[2541]: I0805 22:23:52.355535 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:23:52.361958 containerd[1439]: time="2024-08-05T22:23:52.359926708Z" level=info msg="StopPodSandbox for \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\"" Aug 5 22:23:52.366898 containerd[1439]: time="2024-08-05T22:23:52.366841571Z" level=info msg="Ensure that sandbox b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c in task-service has been cleanup successfully" Aug 5 22:23:52.414689 containerd[1439]: time="2024-08-05T22:23:52.414624121Z" level=error msg="Failed to destroy network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.416490 containerd[1439]: time="2024-08-05T22:23:52.416450838Z" level=error msg="encountered an error cleaning up failed sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.416783 containerd[1439]: time="2024-08-05T22:23:52.416741323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vbwsm,Uid:7d175971-007a-4ccf-8d7c-0d4fdd69885a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.417167 kubelet[2541]: E0805 22:23:52.417140 2541 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.417423 kubelet[2541]: E0805 22:23:52.417408 2541 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-vbwsm" Aug 5 22:23:52.422850 kubelet[2541]: E0805 22:23:52.422767 2541 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vbwsm" Aug 5 22:23:52.423151 kubelet[2541]: E0805 22:23:52.423126 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vbwsm_kube-system(7d175971-007a-4ccf-8d7c-0d4fdd69885a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vbwsm_kube-system(7d175971-007a-4ccf-8d7c-0d4fdd69885a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vbwsm" podUID="7d175971-007a-4ccf-8d7c-0d4fdd69885a" Aug 5 22:23:52.457554 containerd[1439]: time="2024-08-05T22:23:52.456764023Z" level=error msg="StopPodSandbox for \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\" failed" error="failed to destroy network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.457928 kubelet[2541]: E0805 22:23:52.457411 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:23:52.457928 kubelet[2541]: E0805 22:23:52.457511 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae"} Aug 5 22:23:52.457928 kubelet[2541]: E0805 22:23:52.457558 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fac418ef-ee88-4feb-ba03-6949e13d198f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:23:52.457928 kubelet[2541]: E0805 22:23:52.457622 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fac418ef-ee88-4feb-ba03-6949e13d198f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69544c9d74-xldcd" podUID="fac418ef-ee88-4feb-ba03-6949e13d198f" Aug 5 22:23:52.504307 containerd[1439]: time="2024-08-05T22:23:52.504195263Z" level=error msg="StopPodSandbox for \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\" failed" 
error="failed to destroy network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:52.504668 kubelet[2541]: E0805 22:23:52.504598 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:23:52.504668 kubelet[2541]: E0805 22:23:52.504666 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c"} Aug 5 22:23:52.504800 kubelet[2541]: E0805 22:23:52.504714 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eeaa21b6-36ba-499f-950d-4a8d4e76928e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:23:52.504800 kubelet[2541]: E0805 22:23:52.504753 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eeaa21b6-36ba-499f-950d-4a8d4e76928e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e" Aug 5 22:23:52.520295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84-shm.mount: Deactivated successfully. Aug 5 22:23:52.520457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c-shm.mount: Deactivated successfully. Aug 5 22:23:52.527802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae-shm.mount: Deactivated successfully. Aug 5 22:23:53.165691 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:57624.service - OpenSSH per-connection server daemon (10.0.0.1:57624). Aug 5 22:23:53.358412 sshd[3536]: Accepted publickey for core from 10.0.0.1 port 57624 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:23:53.364977 kubelet[2541]: I0805 22:23:53.361253 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:23:53.363884 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:53.366762 kubelet[2541]: I0805 22:23:53.366332 2541 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:23:53.367387 containerd[1439]: time="2024-08-05T22:23:53.367314478Z" level=info msg="StopPodSandbox for \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\"" Aug 5 22:23:53.367707 containerd[1439]: time="2024-08-05T22:23:53.367681377Z" level=info msg="Ensure that sandbox 
bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da in task-service has been cleanup successfully" Aug 5 22:23:53.368409 containerd[1439]: time="2024-08-05T22:23:53.367835065Z" level=info msg="StopPodSandbox for \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\"" Aug 5 22:23:53.368409 containerd[1439]: time="2024-08-05T22:23:53.368118176Z" level=info msg="Ensure that sandbox 1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84 in task-service has been cleanup successfully" Aug 5 22:23:53.414254 systemd-logind[1426]: New session 9 of user core. Aug 5 22:23:53.425649 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:23:53.454687 containerd[1439]: time="2024-08-05T22:23:53.454487526Z" level=error msg="StopPodSandbox for \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\" failed" error="failed to destroy network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:53.456411 kubelet[2541]: E0805 22:23:53.456379 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:23:53.456547 kubelet[2541]: E0805 22:23:53.456442 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84"} Aug 5 22:23:53.456547 kubelet[2541]: E0805 22:23:53.456496 2541 
kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39d442b1-b745-42d5-aa8f-305f09d7421b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:23:53.456547 kubelet[2541]: E0805 22:23:53.456535 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39d442b1-b745-42d5-aa8f-305f09d7421b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ndrgp" podUID="39d442b1-b745-42d5-aa8f-305f09d7421b" Aug 5 22:23:53.457834 containerd[1439]: time="2024-08-05T22:23:53.457768813Z" level=error msg="StopPodSandbox for \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\" failed" error="failed to destroy network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:23:53.458047 kubelet[2541]: E0805 22:23:53.458000 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:23:53.458047 kubelet[2541]: E0805 22:23:53.458039 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da"} Aug 5 22:23:53.458149 kubelet[2541]: E0805 22:23:53.458070 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d175971-007a-4ccf-8d7c-0d4fdd69885a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:23:53.458149 kubelet[2541]: E0805 22:23:53.458097 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d175971-007a-4ccf-8d7c-0d4fdd69885a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vbwsm" podUID="7d175971-007a-4ccf-8d7c-0d4fdd69885a" Aug 5 22:23:53.708074 sshd[3536]: pam_unix(sshd:session): session closed for user core Aug 5 22:23:53.731732 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:57624.service: Deactivated successfully. Aug 5 22:23:53.745428 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:23:53.761557 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. 
Aug 5 22:23:53.764018 systemd-logind[1426]: Removed session 9. Aug 5 22:23:58.737949 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:57630.service - OpenSSH per-connection server daemon (10.0.0.1:57630). Aug 5 22:23:58.834734 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 57630 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:23:58.837902 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:58.863309 systemd-logind[1426]: New session 10 of user core. Aug 5 22:23:58.873681 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:23:59.384663 sshd[3609]: pam_unix(sshd:session): session closed for user core Aug 5 22:23:59.405775 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:23:59.412329 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:57630.service: Deactivated successfully. Aug 5 22:23:59.417898 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:23:59.433060 systemd-logind[1426]: Removed session 10. 
Aug 5 22:24:03.798426 containerd[1439]: time="2024-08-05T22:24:03.798368427Z" level=info msg="StopPodSandbox for \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\"" Aug 5 22:24:03.909977 containerd[1439]: time="2024-08-05T22:24:03.909302119Z" level=error msg="StopPodSandbox for \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\" failed" error="failed to destroy network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:03.917363 kubelet[2541]: E0805 22:24:03.913615 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:24:03.917363 kubelet[2541]: E0805 22:24:03.913677 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da"} Aug 5 22:24:03.917363 kubelet[2541]: E0805 22:24:03.913731 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d175971-007a-4ccf-8d7c-0d4fdd69885a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 
22:24:03.917363 kubelet[2541]: E0805 22:24:03.913772 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d175971-007a-4ccf-8d7c-0d4fdd69885a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vbwsm" podUID="7d175971-007a-4ccf-8d7c-0d4fdd69885a" Aug 5 22:24:04.435854 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:44290.service - OpenSSH per-connection server daemon (10.0.0.1:44290). Aug 5 22:24:04.559600 sshd[3659]: Accepted publickey for core from 10.0.0.1 port 44290 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:04.565816 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:04.591436 systemd-logind[1426]: New session 11 of user core. Aug 5 22:24:04.601157 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 5 22:24:04.799345 containerd[1439]: time="2024-08-05T22:24:04.794535923Z" level=info msg="StopPodSandbox for \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\"" Aug 5 22:24:04.799345 containerd[1439]: time="2024-08-05T22:24:04.795030033Z" level=info msg="StopPodSandbox for \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\"" Aug 5 22:24:04.961629 containerd[1439]: time="2024-08-05T22:24:04.961566114Z" level=error msg="StopPodSandbox for \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\" failed" error="failed to destroy network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:04.963389 kubelet[2541]: E0805 22:24:04.963131 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:24:04.963389 kubelet[2541]: E0805 22:24:04.963200 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84"} Aug 5 22:24:04.963389 kubelet[2541]: E0805 22:24:04.963265 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39d442b1-b745-42d5-aa8f-305f09d7421b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:24:04.963389 kubelet[2541]: E0805 22:24:04.963315 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39d442b1-b745-42d5-aa8f-305f09d7421b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ndrgp" podUID="39d442b1-b745-42d5-aa8f-305f09d7421b" Aug 5 22:24:04.970928 sshd[3659]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:04.972572 containerd[1439]: time="2024-08-05T22:24:04.972063635Z" level=error msg="StopPodSandbox for \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\" failed" error="failed to destroy network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:04.972666 kubelet[2541]: E0805 22:24:04.972382 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:24:04.972666 kubelet[2541]: E0805 22:24:04.972444 2541 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae"} Aug 5 22:24:04.972666 kubelet[2541]: E0805 22:24:04.972495 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fac418ef-ee88-4feb-ba03-6949e13d198f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:24:04.972666 kubelet[2541]: E0805 22:24:04.972535 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fac418ef-ee88-4feb-ba03-6949e13d198f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69544c9d74-xldcd" podUID="fac418ef-ee88-4feb-ba03-6949e13d198f" Aug 5 22:24:04.979857 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:44290.service: Deactivated successfully. Aug 5 22:24:04.987885 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:24:04.992917 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:24:05.022046 systemd-logind[1426]: Removed session 11. 
Aug 5 22:24:06.794366 containerd[1439]: time="2024-08-05T22:24:06.794296695Z" level=info msg="StopPodSandbox for \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\""
Aug 5 22:24:06.935431 containerd[1439]: time="2024-08-05T22:24:06.934497378Z" level=error msg="StopPodSandbox for \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\" failed" error="failed to destroy network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:24:06.935834 kubelet[2541]: E0805 22:24:06.934837 2541 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c"
Aug 5 22:24:06.935834 kubelet[2541]: E0805 22:24:06.934895 2541 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c"}
Aug 5 22:24:06.935834 kubelet[2541]: E0805 22:24:06.934947 2541 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eeaa21b6-36ba-499f-950d-4a8d4e76928e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:24:06.935834 kubelet[2541]: E0805 22:24:06.934988 2541 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eeaa21b6-36ba-499f-950d-4a8d4e76928e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2bspg" podUID="eeaa21b6-36ba-499f-950d-4a8d4e76928e"
Aug 5 22:24:07.786974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262225703.mount: Deactivated successfully.
Aug 5 22:24:08.872187 containerd[1439]: time="2024-08-05T22:24:08.870860120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:24:08.884929 containerd[1439]: time="2024-08-05T22:24:08.884828706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Aug 5 22:24:08.903397 containerd[1439]: time="2024-08-05T22:24:08.901170117Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:24:08.949331 containerd[1439]: time="2024-08-05T22:24:08.947752784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:24:08.949331 containerd[1439]: time="2024-08-05T22:24:08.948775128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 17.622862943s"
Aug 5 22:24:08.949331 containerd[1439]: time="2024-08-05T22:24:08.948809713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Aug 5 22:24:09.026558 containerd[1439]: time="2024-08-05T22:24:09.023901484Z" level=info msg="CreateContainer within sandbox \"23354943e61f20c1616350fc18bec8f203835af38a936e0c7ee16ec6fb5c58be\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Aug 5 22:24:09.464300 containerd[1439]: time="2024-08-05T22:24:09.464054719Z" level=info msg="CreateContainer within sandbox \"23354943e61f20c1616350fc18bec8f203835af38a936e0c7ee16ec6fb5c58be\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2f622236d30d4e8ae0d8c64f9d5165f2be035485ea04553c541786b71afd19ef\""
Aug 5 22:24:09.467399 containerd[1439]: time="2024-08-05T22:24:09.465672733Z" level=info msg="StartContainer for \"2f622236d30d4e8ae0d8c64f9d5165f2be035485ea04553c541786b71afd19ef\""
Aug 5 22:24:09.777716 systemd[1]: Started cri-containerd-2f622236d30d4e8ae0d8c64f9d5165f2be035485ea04553c541786b71afd19ef.scope - libcontainer container 2f622236d30d4e8ae0d8c64f9d5165f2be035485ea04553c541786b71afd19ef.
Aug 5 22:24:09.966966 systemd[1]: run-containerd-runc-k8s.io-2f622236d30d4e8ae0d8c64f9d5165f2be035485ea04553c541786b71afd19ef-runc.JUFBuR.mount: Deactivated successfully.
Aug 5 22:24:10.024590 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:44302.service - OpenSSH per-connection server daemon (10.0.0.1:44302).
Aug 5 22:24:10.216734 containerd[1439]: time="2024-08-05T22:24:10.216255163Z" level=info msg="StartContainer for \"2f622236d30d4e8ae0d8c64f9d5165f2be035485ea04553c541786b71afd19ef\" returns successfully"
Aug 5 22:24:10.356478 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 44302 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:24:10.365453 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:24:10.386458 systemd-logind[1426]: New session 12 of user core.
Aug 5 22:24:10.422420 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Aug 5 22:24:10.422473 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Aug 5 22:24:10.420231 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 5 22:24:10.605809 kubelet[2541]: E0805 22:24:10.602827 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:24:10.730380 systemd[1]: run-containerd-runc-k8s.io-2f622236d30d4e8ae0d8c64f9d5165f2be035485ea04553c541786b71afd19ef-runc.5ldJnL.mount: Deactivated successfully.
Aug 5 22:24:10.736755 sshd[3781]: pam_unix(sshd:session): session closed for user core
Aug 5 22:24:10.755270 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:44302.service: Deactivated successfully.
Aug 5 22:24:10.778917 systemd[1]: session-12.scope: Deactivated successfully.
Aug 5 22:24:10.795221 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit.
Aug 5 22:24:10.803097 systemd-logind[1426]: Removed session 12.
Aug 5 22:24:11.606438 kubelet[2541]: E0805 22:24:11.605674 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:24:13.438277 systemd-networkd[1380]: vxlan.calico: Link UP
Aug 5 22:24:13.438292 systemd-networkd[1380]: vxlan.calico: Gained carrier
Aug 5 22:24:14.981563 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL
Aug 5 22:24:15.714168 kubelet[2541]: E0805 22:24:15.713152 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:24:15.787853 kubelet[2541]: I0805 22:24:15.784829 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-hmnj4" podStartSLOduration=8.651596016 podStartE2EDuration="52.784774395s" podCreationTimestamp="2024-08-05 22:23:23 +0000 UTC" firstStartedPulling="2024-08-05 22:23:24.816705395 +0000 UTC m=+23.145442933" lastFinishedPulling="2024-08-05 22:24:08.949883774 +0000 UTC m=+67.278621312" observedRunningTime="2024-08-05 22:24:10.656419979 +0000 UTC m=+68.985157517" watchObservedRunningTime="2024-08-05 22:24:15.784774395 +0000 UTC m=+74.113511934"
Aug 5 22:24:15.829706 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:58302.service - OpenSSH per-connection server daemon (10.0.0.1:58302).
Aug 5 22:24:15.969708 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 58302 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:24:15.973789 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:24:16.001069 systemd-logind[1426]: New session 13 of user core.
Aug 5 22:24:16.006163 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 5 22:24:16.377476 sshd[4102]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:16.390311 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:58302.service: Deactivated successfully. Aug 5 22:24:16.410696 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:24:16.418862 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:24:16.420149 systemd-logind[1426]: Removed session 13. Aug 5 22:24:16.802152 containerd[1439]: time="2024-08-05T22:24:16.801638954Z" level=info msg="StopPodSandbox for \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\"" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.059 [INFO][4134] k8s.go 608: Cleaning up netns ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.059 [INFO][4134] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" iface="eth0" netns="/var/run/netns/cni-f55df254-11b1-9191-5bee-08d2bcbeebd1" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.060 [INFO][4134] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" iface="eth0" netns="/var/run/netns/cni-f55df254-11b1-9191-5bee-08d2bcbeebd1" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.061 [INFO][4134] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" iface="eth0" netns="/var/run/netns/cni-f55df254-11b1-9191-5bee-08d2bcbeebd1" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.061 [INFO][4134] k8s.go 615: Releasing IP address(es) ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.061 [INFO][4134] utils.go 188: Calico CNI releasing IP address ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.327 [INFO][4142] ipam_plugin.go 411: Releasing address using handleID ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" HandleID="k8s-pod-network.99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.329 [INFO][4142] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.329 [INFO][4142] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.366 [WARNING][4142] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" HandleID="k8s-pod-network.99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.367 [INFO][4142] ipam_plugin.go 439: Releasing address using workloadID ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" HandleID="k8s-pod-network.99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.384 [INFO][4142] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:24:17.402085 containerd[1439]: 2024-08-05 22:24:17.394 [INFO][4134] k8s.go 621: Teardown processing complete. ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:24:17.402085 containerd[1439]: time="2024-08-05T22:24:17.399587521Z" level=info msg="TearDown network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\" successfully" Aug 5 22:24:17.402085 containerd[1439]: time="2024-08-05T22:24:17.399622517Z" level=info msg="StopPodSandbox for \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\" returns successfully" Aug 5 22:24:17.405211 containerd[1439]: time="2024-08-05T22:24:17.404501020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69544c9d74-xldcd,Uid:fac418ef-ee88-4feb-ba03-6949e13d198f,Namespace:calico-system,Attempt:1,}" Aug 5 22:24:17.408474 systemd[1]: run-netns-cni\x2df55df254\x2d11b1\x2d9191\x2d5bee\x2d08d2bcbeebd1.mount: Deactivated successfully. 
Aug 5 22:24:17.800286 containerd[1439]: time="2024-08-05T22:24:17.797338259Z" level=info msg="StopPodSandbox for \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\"" Aug 5 22:24:17.800286 containerd[1439]: time="2024-08-05T22:24:17.798418680Z" level=info msg="StopPodSandbox for \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\"" Aug 5 22:24:17.926673 systemd-networkd[1380]: cali036d5724a46: Link UP Aug 5 22:24:17.930544 systemd-networkd[1380]: cali036d5724a46: Gained carrier Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.578 [INFO][4150] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0 calico-kube-controllers-69544c9d74- calico-system fac418ef-ee88-4feb-ba03-6949e13d198f 916 0 2024-08-05 22:23:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69544c9d74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69544c9d74-xldcd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali036d5724a46 [] []}} ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Namespace="calico-system" Pod="calico-kube-controllers-69544c9d74-xldcd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.578 [INFO][4150] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Namespace="calico-system" Pod="calico-kube-controllers-69544c9d74-xldcd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.728 [INFO][4164] ipam_plugin.go 224: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" HandleID="k8s-pod-network.9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.745 [INFO][4164] ipam_plugin.go 264: Auto assigning IP ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" HandleID="k8s-pod-network.9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000509f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69544c9d74-xldcd", "timestamp":"2024-08-05 22:24:17.728304576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.745 [INFO][4164] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.745 [INFO][4164] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.745 [INFO][4164] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.750 [INFO][4164] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" host="localhost" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.773 [INFO][4164] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.783 [INFO][4164] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.793 [INFO][4164] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.800 [INFO][4164] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.807 [INFO][4164] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" host="localhost" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.821 [INFO][4164] ipam.go 1685: Creating new handle: k8s-pod-network.9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40 Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.851 [INFO][4164] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" host="localhost" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.872 [INFO][4164] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" host="localhost" Aug 5 
22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.872 [INFO][4164] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" host="localhost" Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.872 [INFO][4164] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:24:17.960984 containerd[1439]: 2024-08-05 22:24:17.872 [INFO][4164] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" HandleID="k8s-pod-network.9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:17.962059 containerd[1439]: 2024-08-05 22:24:17.908 [INFO][4150] k8s.go 386: Populated endpoint ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Namespace="calico-system" Pod="calico-kube-controllers-69544c9d74-xldcd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0", GenerateName:"calico-kube-controllers-69544c9d74-", Namespace:"calico-system", SelfLink:"", UID:"fac418ef-ee88-4feb-ba03-6949e13d198f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69544c9d74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69544c9d74-xldcd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali036d5724a46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:24:17.962059 containerd[1439]: 2024-08-05 22:24:17.908 [INFO][4150] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Namespace="calico-system" Pod="calico-kube-controllers-69544c9d74-xldcd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:17.962059 containerd[1439]: 2024-08-05 22:24:17.908 [INFO][4150] dataplane_linux.go 68: Setting the host side veth name to cali036d5724a46 ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Namespace="calico-system" Pod="calico-kube-controllers-69544c9d74-xldcd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:17.962059 containerd[1439]: 2024-08-05 22:24:17.925 [INFO][4150] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Namespace="calico-system" Pod="calico-kube-controllers-69544c9d74-xldcd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:17.962059 containerd[1439]: 2024-08-05 22:24:17.926 [INFO][4150] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Namespace="calico-system" 
Pod="calico-kube-controllers-69544c9d74-xldcd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0", GenerateName:"calico-kube-controllers-69544c9d74-", Namespace:"calico-system", SelfLink:"", UID:"fac418ef-ee88-4feb-ba03-6949e13d198f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69544c9d74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40", Pod:"calico-kube-controllers-69544c9d74-xldcd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali036d5724a46", MAC:"8a:18:47:cb:3a:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:24:17.962059 containerd[1439]: 2024-08-05 22:24:17.944 [INFO][4150] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40" Namespace="calico-system" Pod="calico-kube-controllers-69544c9d74-xldcd" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:17.960 [INFO][4208] k8s.go 608: Cleaning up netns ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:17.960 [INFO][4208] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" iface="eth0" netns="/var/run/netns/cni-998dc5d1-a9df-50dc-c2de-2f2fb11f83db" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:17.960 [INFO][4208] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" iface="eth0" netns="/var/run/netns/cni-998dc5d1-a9df-50dc-c2de-2f2fb11f83db" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:17.962 [INFO][4208] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" iface="eth0" netns="/var/run/netns/cni-998dc5d1-a9df-50dc-c2de-2f2fb11f83db" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:17.962 [INFO][4208] k8s.go 615: Releasing IP address(es) ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:17.962 [INFO][4208] utils.go 188: Calico CNI releasing IP address ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:18.030 [INFO][4225] ipam_plugin.go 411: Releasing address using handleID ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" HandleID="k8s-pod-network.bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:18.031 [INFO][4225] 
ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:18.031 [INFO][4225] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:18.045 [WARNING][4225] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" HandleID="k8s-pod-network.bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:18.046 [INFO][4225] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" HandleID="k8s-pod-network.bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:18.048 [INFO][4225] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:24:18.055780 containerd[1439]: 2024-08-05 22:24:18.052 [INFO][4208] k8s.go 621: Teardown processing complete. ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:24:18.059810 systemd[1]: run-netns-cni\x2d998dc5d1\x2da9df\x2d50dc\x2dc2de\x2d2f2fb11f83db.mount: Deactivated successfully. 
Aug 5 22:24:18.063868 kubelet[2541]: E0805 22:24:18.062294 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:18.064437 containerd[1439]: time="2024-08-05T22:24:18.061670866Z" level=info msg="TearDown network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\" successfully" Aug 5 22:24:18.064437 containerd[1439]: time="2024-08-05T22:24:18.061713827Z" level=info msg="StopPodSandbox for \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\" returns successfully" Aug 5 22:24:18.077650 containerd[1439]: time="2024-08-05T22:24:18.077599155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vbwsm,Uid:7d175971-007a-4ccf-8d7c-0d4fdd69885a,Namespace:kube-system,Attempt:1,}" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:17.992 [INFO][4198] k8s.go 608: Cleaning up netns ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:17.992 [INFO][4198] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" iface="eth0" netns="/var/run/netns/cni-7b0208f6-10ac-02a5-1c16-0b22612e38ea" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:17.995 [INFO][4198] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" iface="eth0" netns="/var/run/netns/cni-7b0208f6-10ac-02a5-1c16-0b22612e38ea" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:17.995 [INFO][4198] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" iface="eth0" netns="/var/run/netns/cni-7b0208f6-10ac-02a5-1c16-0b22612e38ea" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:17.995 [INFO][4198] k8s.go 615: Releasing IP address(es) ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:17.995 [INFO][4198] utils.go 188: Calico CNI releasing IP address ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:18.053 [INFO][4239] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" HandleID="k8s-pod-network.1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:18.053 [INFO][4239] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:18.054 [INFO][4239] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:18.062 [WARNING][4239] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" HandleID="k8s-pod-network.1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:18.063 [INFO][4239] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" HandleID="k8s-pod-network.1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:18.077 [INFO][4239] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:24:18.088542 containerd[1439]: 2024-08-05 22:24:18.081 [INFO][4198] k8s.go 621: Teardown processing complete. ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:24:18.089623 containerd[1439]: time="2024-08-05T22:24:18.089048894Z" level=info msg="TearDown network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\" successfully" Aug 5 22:24:18.089623 containerd[1439]: time="2024-08-05T22:24:18.089085002Z" level=info msg="StopPodSandbox for \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\" returns successfully" Aug 5 22:24:18.089695 kubelet[2541]: E0805 22:24:18.089652 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:18.095545 containerd[1439]: time="2024-08-05T22:24:18.090402450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:18.095545 containerd[1439]: time="2024-08-05T22:24:18.090484824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:18.095545 containerd[1439]: time="2024-08-05T22:24:18.090519780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:18.095545 containerd[1439]: time="2024-08-05T22:24:18.090539797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:18.095797 containerd[1439]: time="2024-08-05T22:24:18.095667940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ndrgp,Uid:39d442b1-b745-42d5-aa8f-305f09d7421b,Namespace:kube-system,Attempt:1,}" Aug 5 22:24:18.098634 systemd[1]: run-netns-cni\x2d7b0208f6\x2d10ac\x2d02a5\x2d1c16\x2d0b22612e38ea.mount: Deactivated successfully. Aug 5 22:24:18.145279 systemd[1]: Started cri-containerd-9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40.scope - libcontainer container 9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40. 
Aug 5 22:24:18.183011 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:24:18.287079 containerd[1439]: time="2024-08-05T22:24:18.277733936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69544c9d74-xldcd,Uid:fac418ef-ee88-4feb-ba03-6949e13d198f,Namespace:calico-system,Attempt:1,} returns sandbox id \"9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40\"" Aug 5 22:24:18.287079 containerd[1439]: time="2024-08-05T22:24:18.286580992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:24:18.580425 systemd-networkd[1380]: cali7d7d9503211: Link UP Aug 5 22:24:18.580817 systemd-networkd[1380]: cali7d7d9503211: Gained carrier Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.345 [INFO][4291] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--ndrgp-eth0 coredns-76f75df574- kube-system 39d442b1-b745-42d5-aa8f-305f09d7421b 924 0 2024-08-05 22:23:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-ndrgp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d7d9503211 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Namespace="kube-system" Pod="coredns-76f75df574-ndrgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ndrgp-" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.345 [INFO][4291] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Namespace="kube-system" Pod="coredns-76f75df574-ndrgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ndrgp-eth0" 
Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.443 [INFO][4321] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" HandleID="k8s-pod-network.f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.470 [INFO][4321] ipam_plugin.go 264: Auto assigning IP ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" HandleID="k8s-pod-network.f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dc650), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-ndrgp", "timestamp":"2024-08-05 22:24:18.443530513 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.471 [INFO][4321] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.471 [INFO][4321] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.471 [INFO][4321] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.487 [INFO][4321] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" host="localhost" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.505 [INFO][4321] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.521 [INFO][4321] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.531 [INFO][4321] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.538 [INFO][4321] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.538 [INFO][4321] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" host="localhost" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.541 [INFO][4321] ipam.go 1685: Creating new handle: k8s-pod-network.f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73 Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.553 [INFO][4321] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" host="localhost" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.564 [INFO][4321] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" host="localhost" Aug 5 
22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.564 [INFO][4321] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" host="localhost" Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.564 [INFO][4321] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:24:18.630273 containerd[1439]: 2024-08-05 22:24:18.564 [INFO][4321] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" HandleID="k8s-pod-network.f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:24:18.631103 containerd[1439]: 2024-08-05 22:24:18.569 [INFO][4291] k8s.go 386: Populated endpoint ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Namespace="kube-system" Pod="coredns-76f75df574-ndrgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ndrgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--ndrgp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39d442b1-b745-42d5-aa8f-305f09d7421b", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-ndrgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d7d9503211", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:24:18.631103 containerd[1439]: 2024-08-05 22:24:18.569 [INFO][4291] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Namespace="kube-system" Pod="coredns-76f75df574-ndrgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:24:18.631103 containerd[1439]: 2024-08-05 22:24:18.569 [INFO][4291] dataplane_linux.go 68: Setting the host side veth name to cali7d7d9503211 ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Namespace="kube-system" Pod="coredns-76f75df574-ndrgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:24:18.631103 containerd[1439]: 2024-08-05 22:24:18.575 [INFO][4291] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Namespace="kube-system" Pod="coredns-76f75df574-ndrgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:24:18.631103 containerd[1439]: 2024-08-05 22:24:18.576 [INFO][4291] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Namespace="kube-system" Pod="coredns-76f75df574-ndrgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ndrgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--ndrgp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39d442b1-b745-42d5-aa8f-305f09d7421b", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73", Pod:"coredns-76f75df574-ndrgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d7d9503211", MAC:"52:3e:58:f8:17:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:24:18.631103 containerd[1439]: 2024-08-05 22:24:18.616 [INFO][4291] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73" Namespace="kube-system" Pod="coredns-76f75df574-ndrgp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:24:18.724993 containerd[1439]: time="2024-08-05T22:24:18.724423994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:18.724993 containerd[1439]: time="2024-08-05T22:24:18.724512991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:18.724993 containerd[1439]: time="2024-08-05T22:24:18.724547595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:18.724993 containerd[1439]: time="2024-08-05T22:24:18.724564988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:18.775465 systemd-networkd[1380]: calice2b774dbbd: Link UP Aug 5 22:24:18.775737 systemd-networkd[1380]: calice2b774dbbd: Gained carrier Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.304 [INFO][4280] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--vbwsm-eth0 coredns-76f75df574- kube-system 7d175971-007a-4ccf-8d7c-0d4fdd69885a 923 0 2024-08-05 22:23:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-vbwsm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calice2b774dbbd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Namespace="kube-system" Pod="coredns-76f75df574-vbwsm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vbwsm-" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.304 [INFO][4280] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Namespace="kube-system" Pod="coredns-76f75df574-vbwsm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.432 [INFO][4315] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" HandleID="k8s-pod-network.41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.472 [INFO][4315] ipam_plugin.go 264: Auto assigning IP ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" 
HandleID="k8s-pod-network.41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006102e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-vbwsm", "timestamp":"2024-08-05 22:24:18.432419841 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.472 [INFO][4315] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.565 [INFO][4315] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.565 [INFO][4315] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.590 [INFO][4315] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" host="localhost" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.639 [INFO][4315] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.683 [INFO][4315] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.697 [INFO][4315] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.710 [INFO][4315] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.710 [INFO][4315] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" host="localhost" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.718 [INFO][4315] ipam.go 1685: Creating new handle: k8s-pod-network.41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35 Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.730 [INFO][4315] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" host="localhost" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.760 [INFO][4315] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" host="localhost" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.760 [INFO][4315] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" host="localhost" Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.760 [INFO][4315] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:24:18.818560 containerd[1439]: 2024-08-05 22:24:18.760 [INFO][4315] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" HandleID="k8s-pod-network.41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.819309 containerd[1439]: 2024-08-05 22:24:18.769 [INFO][4280] k8s.go 386: Populated endpoint ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Namespace="kube-system" Pod="coredns-76f75df574-vbwsm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vbwsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--vbwsm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7d175971-007a-4ccf-8d7c-0d4fdd69885a", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-vbwsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice2b774dbbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:24:18.819309 containerd[1439]: 2024-08-05 22:24:18.770 [INFO][4280] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Namespace="kube-system" Pod="coredns-76f75df574-vbwsm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.819309 containerd[1439]: 2024-08-05 22:24:18.770 [INFO][4280] dataplane_linux.go 68: Setting the host side veth name to calice2b774dbbd ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Namespace="kube-system" Pod="coredns-76f75df574-vbwsm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.819309 containerd[1439]: 2024-08-05 22:24:18.776 [INFO][4280] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Namespace="kube-system" Pod="coredns-76f75df574-vbwsm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.819309 containerd[1439]: 2024-08-05 22:24:18.777 [INFO][4280] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Namespace="kube-system" Pod="coredns-76f75df574-vbwsm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vbwsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--vbwsm-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7d175971-007a-4ccf-8d7c-0d4fdd69885a", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35", Pod:"coredns-76f75df574-vbwsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice2b774dbbd", MAC:"b6:26:bb:19:ca:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:24:18.819309 containerd[1439]: 2024-08-05 22:24:18.810 [INFO][4280] k8s.go 500: Wrote updated endpoint to datastore ContainerID="41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35" Namespace="kube-system" Pod="coredns-76f75df574-vbwsm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:24:18.824891 systemd[1]: 
run-containerd-runc-k8s.io-f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73-runc.1iXCS7.mount: Deactivated successfully. Aug 5 22:24:18.849180 systemd[1]: Started cri-containerd-f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73.scope - libcontainer container f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73. Aug 5 22:24:18.882313 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:24:18.882544 containerd[1439]: time="2024-08-05T22:24:18.881755001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:18.882544 containerd[1439]: time="2024-08-05T22:24:18.881845562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:18.882544 containerd[1439]: time="2024-08-05T22:24:18.881875017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:18.882544 containerd[1439]: time="2024-08-05T22:24:18.881910193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:18.975864 systemd[1]: Started cri-containerd-41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35.scope - libcontainer container 41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35. 
Aug 5 22:24:18.983107 containerd[1439]: time="2024-08-05T22:24:18.982849238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ndrgp,Uid:39d442b1-b745-42d5-aa8f-305f09d7421b,Namespace:kube-system,Attempt:1,} returns sandbox id \"f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73\"" Aug 5 22:24:18.987088 kubelet[2541]: E0805 22:24:18.984621 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:18.987367 containerd[1439]: time="2024-08-05T22:24:18.987199198Z" level=info msg="CreateContainer within sandbox \"f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:24:19.013484 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:24:19.105037 containerd[1439]: time="2024-08-05T22:24:19.103659244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vbwsm,Uid:7d175971-007a-4ccf-8d7c-0d4fdd69885a,Namespace:kube-system,Attempt:1,} returns sandbox id \"41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35\"" Aug 5 22:24:19.105509 kubelet[2541]: E0805 22:24:19.105468 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:19.115477 containerd[1439]: time="2024-08-05T22:24:19.114528119Z" level=info msg="CreateContainer within sandbox \"41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:24:19.192710 containerd[1439]: time="2024-08-05T22:24:19.191555182Z" level=info msg="CreateContainer within sandbox \"f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d3c6a63e332bc47bb23c336941557e81ee18d9d335898eb4b94a4f59000b5c5\"" Aug 5 22:24:19.193388 containerd[1439]: time="2024-08-05T22:24:19.193278743Z" level=info msg="StartContainer for \"7d3c6a63e332bc47bb23c336941557e81ee18d9d335898eb4b94a4f59000b5c5\"" Aug 5 22:24:19.286846 systemd[1]: Started cri-containerd-7d3c6a63e332bc47bb23c336941557e81ee18d9d335898eb4b94a4f59000b5c5.scope - libcontainer container 7d3c6a63e332bc47bb23c336941557e81ee18d9d335898eb4b94a4f59000b5c5. Aug 5 22:24:19.394141 containerd[1439]: time="2024-08-05T22:24:19.393526221Z" level=info msg="CreateContainer within sandbox \"41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37415a61c5b177c290e231cb3ab13abd3dff1255c64ef5c9914dde83cf79aaa2\"" Aug 5 22:24:19.394487 containerd[1439]: time="2024-08-05T22:24:19.394454437Z" level=info msg="StartContainer for \"37415a61c5b177c290e231cb3ab13abd3dff1255c64ef5c9914dde83cf79aaa2\"" Aug 5 22:24:19.469731 containerd[1439]: time="2024-08-05T22:24:19.469640208Z" level=info msg="StartContainer for \"7d3c6a63e332bc47bb23c336941557e81ee18d9d335898eb4b94a4f59000b5c5\" returns successfully" Aug 5 22:24:19.469803 systemd-networkd[1380]: cali036d5724a46: Gained IPv6LL Aug 5 22:24:19.515742 systemd[1]: Started cri-containerd-37415a61c5b177c290e231cb3ab13abd3dff1255c64ef5c9914dde83cf79aaa2.scope - libcontainer container 37415a61c5b177c290e231cb3ab13abd3dff1255c64ef5c9914dde83cf79aaa2. 
Aug 5 22:24:19.618313 containerd[1439]: time="2024-08-05T22:24:19.616806246Z" level=info msg="StartContainer for \"37415a61c5b177c290e231cb3ab13abd3dff1255c64ef5c9914dde83cf79aaa2\" returns successfully" Aug 5 22:24:19.671836 kubelet[2541]: E0805 22:24:19.671590 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:19.678894 kubelet[2541]: E0805 22:24:19.678843 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:19.756214 kubelet[2541]: I0805 22:24:19.754151 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ndrgp" podStartSLOduration=65.75409459 podStartE2EDuration="1m5.75409459s" podCreationTimestamp="2024-08-05 22:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:24:19.753980515 +0000 UTC m=+78.082718053" watchObservedRunningTime="2024-08-05 22:24:19.75409459 +0000 UTC m=+78.082832138" Aug 5 22:24:19.756214 kubelet[2541]: I0805 22:24:19.754570 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vbwsm" podStartSLOduration=65.75454693 podStartE2EDuration="1m5.75454693s" podCreationTimestamp="2024-08-05 22:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:24:19.711270925 +0000 UTC m=+78.040008463" watchObservedRunningTime="2024-08-05 22:24:19.75454693 +0000 UTC m=+78.083284469" Aug 5 22:24:20.031594 systemd-networkd[1380]: cali7d7d9503211: Gained IPv6LL Aug 5 22:24:20.431028 systemd-networkd[1380]: calice2b774dbbd: Gained IPv6LL Aug 5 22:24:20.696970 kubelet[2541]: E0805 
22:24:20.694438 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:20.706764 kubelet[2541]: E0805 22:24:20.706720 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:20.806162 kubelet[2541]: E0805 22:24:20.804117 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:21.430725 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:49456.service - OpenSSH per-connection server daemon (10.0.0.1:49456). Aug 5 22:24:21.573231 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 49456 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:21.576460 sshd[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:21.606379 systemd-logind[1426]: New session 14 of user core. Aug 5 22:24:21.613649 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:24:21.698290 kubelet[2541]: E0805 22:24:21.698144 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:21.699094 kubelet[2541]: E0805 22:24:21.699056 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:22.147952 sshd[4532]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:22.182435 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:49456.service: Deactivated successfully. Aug 5 22:24:22.187445 systemd[1]: session-14.scope: Deactivated successfully. 
Aug 5 22:24:22.193575 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:24:22.214851 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:49470.service - OpenSSH per-connection server daemon (10.0.0.1:49470). Aug 5 22:24:22.222883 systemd-logind[1426]: Removed session 14. Aug 5 22:24:22.323365 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 49470 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:22.324730 sshd[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:22.365593 systemd-logind[1426]: New session 15 of user core. Aug 5 22:24:22.377846 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:24:22.707444 kubelet[2541]: E0805 22:24:22.707404 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:22.785272 sshd[4551]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:22.794651 containerd[1439]: time="2024-08-05T22:24:22.793767308Z" level=info msg="StopPodSandbox for \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\"" Aug 5 22:24:22.808522 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:49470.service: Deactivated successfully. Aug 5 22:24:22.812313 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:24:22.816536 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:24:22.849635 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:49472.service - OpenSSH per-connection server daemon (10.0.0.1:49472). Aug 5 22:24:22.853183 systemd-logind[1426]: Removed session 15. 
Aug 5 22:24:22.997156 sshd[4579]: Accepted publickey for core from 10.0.0.1 port 49472 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:22.996596 sshd[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:23.039722 systemd-logind[1426]: New session 16 of user core. Aug 5 22:24:23.043590 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.012 [INFO][4580] k8s.go 608: Cleaning up netns ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.012 [INFO][4580] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" iface="eth0" netns="/var/run/netns/cni-06988538-3243-f082-6b3a-399b9f94a16a" Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.012 [INFO][4580] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" iface="eth0" netns="/var/run/netns/cni-06988538-3243-f082-6b3a-399b9f94a16a" Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.016 [INFO][4580] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" iface="eth0" netns="/var/run/netns/cni-06988538-3243-f082-6b3a-399b9f94a16a" Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.017 [INFO][4580] k8s.go 615: Releasing IP address(es) ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.035 [INFO][4580] utils.go 188: Calico CNI releasing IP address ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.119 [INFO][4589] ipam_plugin.go 411: Releasing address using handleID ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" HandleID="k8s-pod-network.b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.119 [INFO][4589] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.120 [INFO][4589] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.128 [WARNING][4589] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" HandleID="k8s-pod-network.b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.128 [INFO][4589] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" HandleID="k8s-pod-network.b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.132 [INFO][4589] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:24:23.154208 containerd[1439]: 2024-08-05 22:24:23.139 [INFO][4580] k8s.go 621: Teardown processing complete. ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:24:23.154208 containerd[1439]: time="2024-08-05T22:24:23.150599023Z" level=info msg="TearDown network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\" successfully" Aug 5 22:24:23.154208 containerd[1439]: time="2024-08-05T22:24:23.150663885Z" level=info msg="StopPodSandbox for \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\" returns successfully" Aug 5 22:24:23.159073 containerd[1439]: time="2024-08-05T22:24:23.155813966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bspg,Uid:eeaa21b6-36ba-499f-950d-4a8d4e76928e,Namespace:calico-system,Attempt:1,}" Aug 5 22:24:23.164074 systemd[1]: run-netns-cni\x2d06988538\x2d3243\x2df082\x2d6b3a\x2d399b9f94a16a.mount: Deactivated successfully. Aug 5 22:24:23.481877 sshd[4579]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:23.492694 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:49472.service: Deactivated successfully. Aug 5 22:24:23.500914 systemd[1]: session-16.scope: Deactivated successfully. 
Aug 5 22:24:23.511771 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:24:23.516332 systemd-logind[1426]: Removed session 16. Aug 5 22:24:23.909122 containerd[1439]: time="2024-08-05T22:24:23.908137208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:23.937725 containerd[1439]: time="2024-08-05T22:24:23.933162288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 5 22:24:23.940902 containerd[1439]: time="2024-08-05T22:24:23.940241636Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:23.952931 containerd[1439]: time="2024-08-05T22:24:23.944448183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:23.952931 containerd[1439]: time="2024-08-05T22:24:23.945479763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 5.658859376s" Aug 5 22:24:23.952931 containerd[1439]: time="2024-08-05T22:24:23.945514869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:24:23.979424 containerd[1439]: time="2024-08-05T22:24:23.975196351Z" level=info msg="CreateContainer within sandbox 
\"9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:24:24.064626 containerd[1439]: time="2024-08-05T22:24:24.064549612Z" level=info msg="CreateContainer within sandbox \"9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b47b6315e441080b2d27dfef53a179276bd065ca1bb3454d60fdacd85c82fec0\"" Aug 5 22:24:24.068397 containerd[1439]: time="2024-08-05T22:24:24.065792750Z" level=info msg="StartContainer for \"b47b6315e441080b2d27dfef53a179276bd065ca1bb3454d60fdacd85c82fec0\"" Aug 5 22:24:24.175562 systemd-networkd[1380]: califc518856891: Link UP Aug 5 22:24:24.188846 systemd-networkd[1380]: califc518856891: Gained carrier Aug 5 22:24:24.299432 systemd[1]: run-containerd-runc-k8s.io-b47b6315e441080b2d27dfef53a179276bd065ca1bb3454d60fdacd85c82fec0-runc.DPLDoo.mount: Deactivated successfully. Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:23.733 [INFO][4605] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2bspg-eth0 csi-node-driver- calico-system eeaa21b6-36ba-499f-950d-4a8d4e76928e 1005 0 2024-08-05 22:23:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-2bspg eth0 default [] [] [kns.calico-system ksa.calico-system.default] califc518856891 [] []}} ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Namespace="calico-system" Pod="csi-node-driver-2bspg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2bspg-" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:23.734 [INFO][4605] 
k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Namespace="calico-system" Pod="csi-node-driver-2bspg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:23.969 [INFO][4623] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" HandleID="k8s-pod-network.f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:23.992 [INFO][4623] ipam_plugin.go 264: Auto assigning IP ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" HandleID="k8s-pod-network.f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000174750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2bspg", "timestamp":"2024-08-05 22:24:23.969586556 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:23.992 [INFO][4623] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:23.992 [INFO][4623] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:23.993 [INFO][4623] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:23.997 [INFO][4623] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" host="localhost" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.012 [INFO][4623] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.030 [INFO][4623] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.042 [INFO][4623] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.059 [INFO][4623] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.059 [INFO][4623] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" host="localhost" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.068 [INFO][4623] ipam.go 1685: Creating new handle: k8s-pod-network.f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6 Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.090 [INFO][4623] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" host="localhost" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.144 [INFO][4623] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" host="localhost" Aug 5 
22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.145 [INFO][4623] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" host="localhost" Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.145 [INFO][4623] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:24:24.340029 containerd[1439]: 2024-08-05 22:24:24.145 [INFO][4623] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" HandleID="k8s-pod-network.f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:24.349307 containerd[1439]: 2024-08-05 22:24:24.160 [INFO][4605] k8s.go 386: Populated endpoint ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Namespace="calico-system" Pod="csi-node-driver-2bspg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2bspg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2bspg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeaa21b6-36ba-499f-950d-4a8d4e76928e", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2bspg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califc518856891", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:24:24.349307 containerd[1439]: 2024-08-05 22:24:24.160 [INFO][4605] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Namespace="calico-system" Pod="csi-node-driver-2bspg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:24.349307 containerd[1439]: 2024-08-05 22:24:24.160 [INFO][4605] dataplane_linux.go 68: Setting the host side veth name to califc518856891 ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Namespace="calico-system" Pod="csi-node-driver-2bspg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:24.349307 containerd[1439]: 2024-08-05 22:24:24.174 [INFO][4605] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Namespace="calico-system" Pod="csi-node-driver-2bspg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:24.349307 containerd[1439]: 2024-08-05 22:24:24.174 [INFO][4605] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Namespace="calico-system" Pod="csi-node-driver-2bspg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2bspg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2bspg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeaa21b6-36ba-499f-950d-4a8d4e76928e", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6", Pod:"csi-node-driver-2bspg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califc518856891", MAC:"8a:ae:2f:09:36:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:24:24.349307 containerd[1439]: 2024-08-05 22:24:24.315 [INFO][4605] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6" Namespace="calico-system" Pod="csi-node-driver-2bspg" WorkloadEndpoint="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:24:24.428726 systemd[1]: Started cri-containerd-b47b6315e441080b2d27dfef53a179276bd065ca1bb3454d60fdacd85c82fec0.scope - libcontainer container b47b6315e441080b2d27dfef53a179276bd065ca1bb3454d60fdacd85c82fec0. 
Aug 5 22:24:24.545880 containerd[1439]: time="2024-08-05T22:24:24.545521162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:24.545880 containerd[1439]: time="2024-08-05T22:24:24.545634745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:24.545880 containerd[1439]: time="2024-08-05T22:24:24.545681903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:24.548312 containerd[1439]: time="2024-08-05T22:24:24.545698023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:24.621956 systemd[1]: Started cri-containerd-f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6.scope - libcontainer container f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6. 
Aug 5 22:24:24.681983 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:24:24.690827 containerd[1439]: time="2024-08-05T22:24:24.689148625Z" level=info msg="StartContainer for \"b47b6315e441080b2d27dfef53a179276bd065ca1bb3454d60fdacd85c82fec0\" returns successfully" Aug 5 22:24:24.781828 containerd[1439]: time="2024-08-05T22:24:24.773018257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2bspg,Uid:eeaa21b6-36ba-499f-950d-4a8d4e76928e,Namespace:calico-system,Attempt:1,} returns sandbox id \"f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6\"" Aug 5 22:24:24.797186 containerd[1439]: time="2024-08-05T22:24:24.790903434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:24:25.013500 kubelet[2541]: I0805 22:24:25.012777 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69544c9d74-xldcd" podStartSLOduration=56.351357731 podStartE2EDuration="1m2.012668816s" podCreationTimestamp="2024-08-05 22:23:23 +0000 UTC" firstStartedPulling="2024-08-05 22:24:18.286167915 +0000 UTC m=+76.614905463" lastFinishedPulling="2024-08-05 22:24:23.94747901 +0000 UTC m=+82.276216548" observedRunningTime="2024-08-05 22:24:24.806289434 +0000 UTC m=+83.135026972" watchObservedRunningTime="2024-08-05 22:24:25.012668816 +0000 UTC m=+83.341406354" Aug 5 22:24:25.722120 systemd-networkd[1380]: califc518856891: Gained IPv6LL Aug 5 22:24:27.180262 containerd[1439]: time="2024-08-05T22:24:27.177467973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:27.207278 containerd[1439]: time="2024-08-05T22:24:27.207191784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:24:27.269552 containerd[1439]: 
time="2024-08-05T22:24:27.269466718Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:27.269745 containerd[1439]: time="2024-08-05T22:24:27.269635134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.478675475s" Aug 5 22:24:27.269745 containerd[1439]: time="2024-08-05T22:24:27.269701419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:24:27.270586 containerd[1439]: time="2024-08-05T22:24:27.270542389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:27.278438 containerd[1439]: time="2024-08-05T22:24:27.277138185Z" level=info msg="CreateContainer within sandbox \"f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:24:27.509250 containerd[1439]: time="2024-08-05T22:24:27.509165327Z" level=info msg="CreateContainer within sandbox \"f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"81a278ea7a67119af1a63caaa8376586eb598171f6233b3cda417e1c7598c4c8\"" Aug 5 22:24:27.512195 containerd[1439]: time="2024-08-05T22:24:27.512103409Z" level=info msg="StartContainer for \"81a278ea7a67119af1a63caaa8376586eb598171f6233b3cda417e1c7598c4c8\"" Aug 5 22:24:27.620361 systemd[1]: Started 
cri-containerd-81a278ea7a67119af1a63caaa8376586eb598171f6233b3cda417e1c7598c4c8.scope - libcontainer container 81a278ea7a67119af1a63caaa8376586eb598171f6233b3cda417e1c7598c4c8. Aug 5 22:24:27.732390 containerd[1439]: time="2024-08-05T22:24:27.732321218Z" level=info msg="StartContainer for \"81a278ea7a67119af1a63caaa8376586eb598171f6233b3cda417e1c7598c4c8\" returns successfully" Aug 5 22:24:27.739936 containerd[1439]: time="2024-08-05T22:24:27.738662715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:24:28.516681 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:49482.service - OpenSSH per-connection server daemon (10.0.0.1:49482). Aug 5 22:24:28.845577 sshd[4792]: Accepted publickey for core from 10.0.0.1 port 49482 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:28.846527 sshd[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:28.889124 systemd-logind[1426]: New session 17 of user core. Aug 5 22:24:28.902770 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:24:29.275377 sshd[4792]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:29.283070 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:24:29.284629 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:49482.service: Deactivated successfully. Aug 5 22:24:29.290285 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:24:29.293532 systemd-logind[1426]: Removed session 17. 
Aug 5 22:24:29.798802 kubelet[2541]: E0805 22:24:29.798007 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:30.613194 containerd[1439]: time="2024-08-05T22:24:30.604875690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:30.613194 containerd[1439]: time="2024-08-05T22:24:30.605400647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 5 22:24:30.619852 containerd[1439]: time="2024-08-05T22:24:30.618473922Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:30.635671 containerd[1439]: time="2024-08-05T22:24:30.632134972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:30.635671 containerd[1439]: time="2024-08-05T22:24:30.633106317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.89403244s" Aug 5 22:24:30.635671 containerd[1439]: time="2024-08-05T22:24:30.633138828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 5 
22:24:30.650574 containerd[1439]: time="2024-08-05T22:24:30.650522493Z" level=info msg="CreateContainer within sandbox \"f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:24:30.732807 containerd[1439]: time="2024-08-05T22:24:30.732273367Z" level=info msg="CreateContainer within sandbox \"f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"abf952a8581415f8c4ee6bdf99bd797ac3c8216d54a758fe8945218b73ca9548\"" Aug 5 22:24:30.735739 containerd[1439]: time="2024-08-05T22:24:30.733222390Z" level=info msg="StartContainer for \"abf952a8581415f8c4ee6bdf99bd797ac3c8216d54a758fe8945218b73ca9548\"" Aug 5 22:24:30.808707 systemd[1]: Started cri-containerd-abf952a8581415f8c4ee6bdf99bd797ac3c8216d54a758fe8945218b73ca9548.scope - libcontainer container abf952a8581415f8c4ee6bdf99bd797ac3c8216d54a758fe8945218b73ca9548. Aug 5 22:24:31.008074 containerd[1439]: time="2024-08-05T22:24:31.005154183Z" level=info msg="StartContainer for \"abf952a8581415f8c4ee6bdf99bd797ac3c8216d54a758fe8945218b73ca9548\" returns successfully" Aug 5 22:24:31.054346 kubelet[2541]: I0805 22:24:31.053201 2541 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:24:31.056757 kubelet[2541]: I0805 22:24:31.056716 2541 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:24:34.335288 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:48808.service - OpenSSH per-connection server daemon (10.0.0.1:48808). 
Aug 5 22:24:34.565145 sshd[4858]: Accepted publickey for core from 10.0.0.1 port 48808 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:34.570001 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:34.602540 systemd-logind[1426]: New session 18 of user core. Aug 5 22:24:34.641559 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:24:34.793627 kubelet[2541]: E0805 22:24:34.792888 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:35.133190 sshd[4858]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:35.137990 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:48808.service: Deactivated successfully. Aug 5 22:24:35.145425 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:24:35.170405 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:24:35.175743 systemd-logind[1426]: Removed session 18. Aug 5 22:24:40.154714 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:48816.service - OpenSSH per-connection server daemon (10.0.0.1:48816). Aug 5 22:24:40.236324 sshd[4879]: Accepted publickey for core from 10.0.0.1 port 48816 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:40.237117 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:40.255090 systemd-logind[1426]: New session 19 of user core. Aug 5 22:24:40.263696 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:24:40.552999 sshd[4879]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:40.560046 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:48816.service: Deactivated successfully. Aug 5 22:24:40.564036 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:24:40.586842 systemd-logind[1426]: Session 19 logged out. 
Waiting for processes to exit. Aug 5 22:24:40.603133 systemd-logind[1426]: Removed session 19. Aug 5 22:24:41.809470 kubelet[2541]: E0805 22:24:41.797936 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:43.813578 kubelet[2541]: E0805 22:24:43.810193 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:24:45.603339 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:37726.service - OpenSSH per-connection server daemon (10.0.0.1:37726). Aug 5 22:24:45.808341 sshd[4925]: Accepted publickey for core from 10.0.0.1 port 37726 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:45.813799 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:45.864611 systemd-logind[1426]: New session 20 of user core. Aug 5 22:24:45.904804 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 22:24:46.187660 sshd[4925]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:46.196123 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:37726.service: Deactivated successfully. Aug 5 22:24:46.201341 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 22:24:46.219321 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit. Aug 5 22:24:46.223677 systemd-logind[1426]: Removed session 20. Aug 5 22:24:51.282272 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:46876.service - OpenSSH per-connection server daemon (10.0.0.1:46876). 
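The repeated kubelet "Nameserver limits exceeded" entries above come from the classic glibc resolver cap: only the first three `nameserver` lines in resolv.conf take effect, so kubelet logs which servers were applied and warns that the rest were omitted. A minimal sketch of that check (hypothetical helper for illustration, not kubelet's actual code):

```python
# Illustrative sketch of the check behind kubelet's "Nameserver limits
# exceeded" warning. glibc's resolver honors at most 3 nameserver entries
# (MAXNS in resolv.h), so any extras are silently ignored by lookups.
MAXNS = 3  # glibc resolv.conf nameserver limit

def effective_nameservers(resolv_conf_text):
    """Return (applied, omitted) nameserver lists from resolv.conf content."""
    servers = [
        parts[1]
        for line in resolv_conf_text.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAXNS], servers[MAXNS:]

# Hypothetical resolv.conf with one server too many; the first three match
# the "applied nameserver line" shown in the log entries above.
conf = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
applied, omitted = effective_nameservers(conf)
print("applied:", applied)
print("omitted:", omitted)
```

With the sample file above, `applied` is the three servers kubelet reports in its warning and `omitted` holds the dropped extras.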
Aug 5 22:24:51.415646 sshd[4985]: Accepted publickey for core from 10.0.0.1 port 46876 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:51.431086 sshd[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:51.468460 systemd-logind[1426]: New session 21 of user core. Aug 5 22:24:51.486012 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 22:24:51.881371 sshd[4985]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:51.893658 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:46876.service: Deactivated successfully. Aug 5 22:24:51.898959 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 22:24:51.900571 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit. Aug 5 22:24:51.902486 systemd-logind[1426]: Removed session 21. Aug 5 22:24:56.945614 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:46888.service - OpenSSH per-connection server daemon (10.0.0.1:46888). Aug 5 22:24:57.010959 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 46888 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:24:57.022198 sshd[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:24:57.051943 systemd-logind[1426]: New session 22 of user core. Aug 5 22:24:57.069344 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 22:24:57.398874 sshd[5010]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:57.410721 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:46888.service: Deactivated successfully. Aug 5 22:24:57.420046 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 22:24:57.423169 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit. Aug 5 22:24:57.424650 systemd-logind[1426]: Removed session 22. 
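Sessions 18 through 22 above all follow the same fixed cycle: sshd accepts the publickey, pam_unix opens the session, systemd-logind registers it, then the teardown runs in reverse. A small parser for pulling those session events out of journal text (the regex is an assumption fitted to these exact lines, not a general journald grammar):

```python
import re

# Matches systemd-logind entries like "New session 18 of user core." as they
# appear in the journal text above. Pattern is fitted to this log's format
# and is illustrative only.
SESSION_RE = re.compile(r"New session (\d+) of user (\w+)")

def summarize_sessions(log_text):
    """Return (session_id, user) tuples in order of appearance."""
    return [(int(m.group(1)), m.group(2)) for m in SESSION_RE.finditer(log_text)]

# Hypothetical excerpt in the same shape as the entries above.
sample = ("systemd-logind[1426]: New session 18 of user core. "
          "systemd-logind[1426]: New session 19 of user core.")
print(summarize_sessions(sample))
```

Run against the full transcript, this would recover the 18..23 sequence of `core` logins visible in this section.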
Aug 5 22:25:01.783043 containerd[1439]: time="2024-08-05T22:25:01.782472703Z" level=info msg="StopPodSandbox for \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\"" Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.007 [WARNING][5040] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0", GenerateName:"calico-kube-controllers-69544c9d74-", Namespace:"calico-system", SelfLink:"", UID:"fac418ef-ee88-4feb-ba03-6949e13d198f", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69544c9d74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40", Pod:"calico-kube-controllers-69544c9d74-xldcd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali036d5724a46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 
22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.007 [INFO][5040] k8s.go 608: Cleaning up netns ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.007 [INFO][5040] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" iface="eth0" netns="" Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.007 [INFO][5040] k8s.go 615: Releasing IP address(es) ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.007 [INFO][5040] utils.go 188: Calico CNI releasing IP address ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.065 [INFO][5047] ipam_plugin.go 411: Releasing address using handleID ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" HandleID="k8s-pod-network.99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.066 [INFO][5047] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.067 [INFO][5047] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.088 [WARNING][5047] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" HandleID="k8s-pod-network.99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.088 [INFO][5047] ipam_plugin.go 439: Releasing address using workloadID ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" HandleID="k8s-pod-network.99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.097 [INFO][5047] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:02.111347 containerd[1439]: 2024-08-05 22:25:02.103 [INFO][5040] k8s.go 621: Teardown processing complete. ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:25:02.111347 containerd[1439]: time="2024-08-05T22:25:02.111067464Z" level=info msg="TearDown network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\" successfully" Aug 5 22:25:02.111347 containerd[1439]: time="2024-08-05T22:25:02.111104835Z" level=info msg="StopPodSandbox for \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\" returns successfully" Aug 5 22:25:02.138702 containerd[1439]: time="2024-08-05T22:25:02.138377665Z" level=info msg="RemovePodSandbox for \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\"" Aug 5 22:25:02.146442 containerd[1439]: time="2024-08-05T22:25:02.145112469Z" level=info msg="Forcibly stopping sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\"" Aug 5 22:25:02.423586 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:40468.service - OpenSSH per-connection server daemon (10.0.0.1:40468). 
Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.320 [WARNING][5070] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0", GenerateName:"calico-kube-controllers-69544c9d74-", Namespace:"calico-system", SelfLink:"", UID:"fac418ef-ee88-4feb-ba03-6949e13d198f", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69544c9d74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9082821c451d09fb07a60c72d20f4330818ec10603f086b21886f27bffc4de40", Pod:"calico-kube-controllers-69544c9d74-xldcd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali036d5724a46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.320 [INFO][5070] k8s.go 608: Cleaning up netns ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 
22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.323 [INFO][5070] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" iface="eth0" netns="" Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.323 [INFO][5070] k8s.go 615: Releasing IP address(es) ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.323 [INFO][5070] utils.go 188: Calico CNI releasing IP address ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.385 [INFO][5078] ipam_plugin.go 411: Releasing address using handleID ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" HandleID="k8s-pod-network.99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.385 [INFO][5078] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.385 [INFO][5078] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.409 [WARNING][5078] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" HandleID="k8s-pod-network.99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.409 [INFO][5078] ipam_plugin.go 439: Releasing address using workloadID ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" HandleID="k8s-pod-network.99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Workload="localhost-k8s-calico--kube--controllers--69544c9d74--xldcd-eth0" Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.418 [INFO][5078] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:02.449455 containerd[1439]: 2024-08-05 22:25:02.428 [INFO][5070] k8s.go 621: Teardown processing complete. ContainerID="99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae" Aug 5 22:25:02.449455 containerd[1439]: time="2024-08-05T22:25:02.446073445Z" level=info msg="TearDown network for sandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\" successfully" Aug 5 22:25:02.484304 containerd[1439]: time="2024-08-05T22:25:02.484053948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:25:02.484304 containerd[1439]: time="2024-08-05T22:25:02.484164785Z" level=info msg="RemovePodSandbox \"99ff8338bf6479ef6dd76ea45e4dc4d4c12ab73335b0999921d7fca22ac057ae\" returns successfully" Aug 5 22:25:02.485461 containerd[1439]: time="2024-08-05T22:25:02.484824454Z" level=info msg="StopPodSandbox for \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\"" Aug 5 22:25:02.510257 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 40468 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:25:02.518208 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:25:02.536618 systemd-logind[1426]: New session 23 of user core. Aug 5 22:25:02.552753 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.612 [WARNING][5102] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--vbwsm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7d175971-007a-4ccf-8d7c-0d4fdd69885a", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35", Pod:"coredns-76f75df574-vbwsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice2b774dbbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.612 [INFO][5102] k8s.go 608: Cleaning up netns ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.612 [INFO][5102] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" iface="eth0" netns="" Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.612 [INFO][5102] k8s.go 615: Releasing IP address(es) ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.612 [INFO][5102] utils.go 188: Calico CNI releasing IP address ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.706 [INFO][5110] ipam_plugin.go 411: Releasing address using handleID ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" HandleID="k8s-pod-network.bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.706 [INFO][5110] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.706 [INFO][5110] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.720 [WARNING][5110] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" HandleID="k8s-pod-network.bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.720 [INFO][5110] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" HandleID="k8s-pod-network.bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.724 [INFO][5110] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:25:02.740533 containerd[1439]: 2024-08-05 22:25:02.732 [INFO][5102] k8s.go 621: Teardown processing complete. ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:25:02.740533 containerd[1439]: time="2024-08-05T22:25:02.740399705Z" level=info msg="TearDown network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\" successfully" Aug 5 22:25:02.741206 containerd[1439]: time="2024-08-05T22:25:02.740978843Z" level=info msg="StopPodSandbox for \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\" returns successfully" Aug 5 22:25:02.749066 containerd[1439]: time="2024-08-05T22:25:02.747332070Z" level=info msg="RemovePodSandbox for \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\"" Aug 5 22:25:02.754324 containerd[1439]: time="2024-08-05T22:25:02.753993005Z" level=info msg="Forcibly stopping sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\"" Aug 5 22:25:02.852743 sshd[5087]: pam_unix(sshd:session): session closed for user core Aug 5 22:25:02.866696 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:40468.service: Deactivated successfully. Aug 5 22:25:02.873315 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 22:25:02.875497 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit. Aug 5 22:25:02.877676 systemd-logind[1426]: Removed session 23. Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.885 [WARNING][5141] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--vbwsm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7d175971-007a-4ccf-8d7c-0d4fdd69885a", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41b3ad725de20dd68a3d5a0b2a3755e2f3222bfb03751866de5a32113cde1b35", Pod:"coredns-76f75df574-vbwsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice2b774dbbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.886 [INFO][5141] k8s.go 608: Cleaning up netns 
ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.886 [INFO][5141] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" iface="eth0" netns="" Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.886 [INFO][5141] k8s.go 615: Releasing IP address(es) ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.886 [INFO][5141] utils.go 188: Calico CNI releasing IP address ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.917 [INFO][5150] ipam_plugin.go 411: Releasing address using handleID ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" HandleID="k8s-pod-network.bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.918 [INFO][5150] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.918 [INFO][5150] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.931 [WARNING][5150] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" HandleID="k8s-pod-network.bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.931 [INFO][5150] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" HandleID="k8s-pod-network.bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Workload="localhost-k8s-coredns--76f75df574--vbwsm-eth0" Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.939 [INFO][5150] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:02.949321 containerd[1439]: 2024-08-05 22:25:02.942 [INFO][5141] k8s.go 621: Teardown processing complete. ContainerID="bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da" Aug 5 22:25:02.949321 containerd[1439]: time="2024-08-05T22:25:02.947102988Z" level=info msg="TearDown network for sandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\" successfully" Aug 5 22:25:02.963191 containerd[1439]: time="2024-08-05T22:25:02.961866264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:25:02.963191 containerd[1439]: time="2024-08-05T22:25:02.961993132Z" level=info msg="RemovePodSandbox \"bcaff0440e4320fec32580bc79c4b85f6b7bb2b1692af4d2ed4be19e4961e6da\" returns successfully" Aug 5 22:25:02.964051 containerd[1439]: time="2024-08-05T22:25:02.963989500Z" level=info msg="StopPodSandbox for \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\"" Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.108 [WARNING][5171] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--ndrgp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39d442b1-b745-42d5-aa8f-305f09d7421b", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73", Pod:"coredns-76f75df574-ndrgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d7d9503211", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.108 [INFO][5171] k8s.go 608: Cleaning up netns ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.108 [INFO][5171] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" iface="eth0" netns="" Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.108 [INFO][5171] k8s.go 615: Releasing IP address(es) ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.108 [INFO][5171] utils.go 188: Calico CNI releasing IP address ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.157 [INFO][5179] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" HandleID="k8s-pod-network.1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.157 [INFO][5179] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.157 [INFO][5179] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.181 [WARNING][5179] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" HandleID="k8s-pod-network.1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.182 [INFO][5179] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" HandleID="k8s-pod-network.1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.192 [INFO][5179] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:03.205248 containerd[1439]: 2024-08-05 22:25:03.198 [INFO][5171] k8s.go 621: Teardown processing complete. ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:25:03.205248 containerd[1439]: time="2024-08-05T22:25:03.203613209Z" level=info msg="TearDown network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\" successfully" Aug 5 22:25:03.205248 containerd[1439]: time="2024-08-05T22:25:03.203645300Z" level=info msg="StopPodSandbox for \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\" returns successfully" Aug 5 22:25:03.211173 containerd[1439]: time="2024-08-05T22:25:03.211076811Z" level=info msg="RemovePodSandbox for \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\"" Aug 5 22:25:03.211173 containerd[1439]: time="2024-08-05T22:25:03.211126104Z" level=info msg="Forcibly stopping sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\"" Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.305 [WARNING][5200] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--ndrgp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"39d442b1-b745-42d5-aa8f-305f09d7421b", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f25105f304475a295f342ec683938e90c92c576cdedeb8f1ca0dc11c23d71e73", Pod:"coredns-76f75df574-ndrgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d7d9503211", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.305 [INFO][5200] k8s.go 608: 
Cleaning up netns ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.305 [INFO][5200] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" iface="eth0" netns="" Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.305 [INFO][5200] k8s.go 615: Releasing IP address(es) ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.305 [INFO][5200] utils.go 188: Calico CNI releasing IP address ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.359 [INFO][5208] ipam_plugin.go 411: Releasing address using handleID ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" HandleID="k8s-pod-network.1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.360 [INFO][5208] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.360 [INFO][5208] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.369 [WARNING][5208] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" HandleID="k8s-pod-network.1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.369 [INFO][5208] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" HandleID="k8s-pod-network.1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Workload="localhost-k8s-coredns--76f75df574--ndrgp-eth0" Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.373 [INFO][5208] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:03.386178 containerd[1439]: 2024-08-05 22:25:03.381 [INFO][5200] k8s.go 621: Teardown processing complete. ContainerID="1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84" Aug 5 22:25:03.386178 containerd[1439]: time="2024-08-05T22:25:03.385752632Z" level=info msg="TearDown network for sandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\" successfully" Aug 5 22:25:03.393933 containerd[1439]: time="2024-08-05T22:25:03.393663753Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:25:03.393933 containerd[1439]: time="2024-08-05T22:25:03.393792895Z" level=info msg="RemovePodSandbox \"1a2cca21cc9d5fa8a965ffa450c157d074298e50d1cccfd67668099f55356f84\" returns successfully" Aug 5 22:25:03.395256 containerd[1439]: time="2024-08-05T22:25:03.395187083Z" level=info msg="StopPodSandbox for \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\"" Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.560 [WARNING][5232] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2bspg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeaa21b6-36ba-499f-950d-4a8d4e76928e", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6", Pod:"csi-node-driver-2bspg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"califc518856891", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.561 [INFO][5232] k8s.go 608: Cleaning up netns ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.561 [INFO][5232] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" iface="eth0" netns="" Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.561 [INFO][5232] k8s.go 615: Releasing IP address(es) ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.561 [INFO][5232] utils.go 188: Calico CNI releasing IP address ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.609 [INFO][5239] ipam_plugin.go 411: Releasing address using handleID ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" HandleID="k8s-pod-network.b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.610 [INFO][5239] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.610 [INFO][5239] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.630 [WARNING][5239] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" HandleID="k8s-pod-network.b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.631 [INFO][5239] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" HandleID="k8s-pod-network.b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.641 [INFO][5239] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:03.653563 containerd[1439]: 2024-08-05 22:25:03.647 [INFO][5232] k8s.go 621: Teardown processing complete. ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:25:03.653563 containerd[1439]: time="2024-08-05T22:25:03.653423713Z" level=info msg="TearDown network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\" successfully" Aug 5 22:25:03.653563 containerd[1439]: time="2024-08-05T22:25:03.653456105Z" level=info msg="StopPodSandbox for \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\" returns successfully" Aug 5 22:25:03.658962 containerd[1439]: time="2024-08-05T22:25:03.654642922Z" level=info msg="RemovePodSandbox for \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\"" Aug 5 22:25:03.658962 containerd[1439]: time="2024-08-05T22:25:03.654676245Z" level=info msg="Forcibly stopping sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\"" Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.783 [WARNING][5261] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2bspg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eeaa21b6-36ba-499f-950d-4a8d4e76928e", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8f7d254313e6490ee83baaa114fdf2ce8ea933edf095439caa27caeb8cb54a6", Pod:"csi-node-driver-2bspg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califc518856891", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.783 [INFO][5261] k8s.go 608: Cleaning up netns ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.783 [INFO][5261] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" iface="eth0" netns="" Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.783 [INFO][5261] k8s.go 615: Releasing IP address(es) ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.783 [INFO][5261] utils.go 188: Calico CNI releasing IP address ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.871 [INFO][5268] ipam_plugin.go 411: Releasing address using handleID ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" HandleID="k8s-pod-network.b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.874 [INFO][5268] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.874 [INFO][5268] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.886 [WARNING][5268] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" HandleID="k8s-pod-network.b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.887 [INFO][5268] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" HandleID="k8s-pod-network.b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Workload="localhost-k8s-csi--node--driver--2bspg-eth0" Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.891 [INFO][5268] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:25:03.912934 containerd[1439]: 2024-08-05 22:25:03.897 [INFO][5261] k8s.go 621: Teardown processing complete. ContainerID="b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c" Aug 5 22:25:03.912934 containerd[1439]: time="2024-08-05T22:25:03.909905961Z" level=info msg="TearDown network for sandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\" successfully" Aug 5 22:25:03.915369 containerd[1439]: time="2024-08-05T22:25:03.915065817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:25:03.915369 containerd[1439]: time="2024-08-05T22:25:03.915293455Z" level=info msg="RemovePodSandbox \"b201dbd7d005700acce5f36863643a00ad5817c4ebfd9dd06379e05bffe5a98c\" returns successfully" Aug 5 22:25:07.874488 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:40472.service - OpenSSH per-connection server daemon (10.0.0.1:40472). Aug 5 22:25:07.992149 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 40472 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:25:07.994051 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:25:08.033514 systemd-logind[1426]: New session 24 of user core. Aug 5 22:25:08.052845 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 5 22:25:08.442776 sshd[5283]: pam_unix(sshd:session): session closed for user core Aug 5 22:25:08.461910 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:40472.service: Deactivated successfully. Aug 5 22:25:08.483750 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 22:25:08.495982 systemd-logind[1426]: Session 24 logged out. Waiting for processes to exit. Aug 5 22:25:08.506269 systemd-logind[1426]: Removed session 24. 
Aug 5 22:25:08.540254 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:40484.service - OpenSSH per-connection server daemon (10.0.0.1:40484). Aug 5 22:25:08.632472 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 40484 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:25:08.633923 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:25:08.680888 systemd-logind[1426]: New session 25 of user core. Aug 5 22:25:08.690754 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 5 22:25:09.969767 sshd[5297]: pam_unix(sshd:session): session closed for user core Aug 5 22:25:09.986055 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:40484.service: Deactivated successfully. Aug 5 22:25:09.989409 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 22:25:09.996081 systemd-logind[1426]: Session 25 logged out. Waiting for processes to exit. Aug 5 22:25:10.009272 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:40498.service - OpenSSH per-connection server daemon (10.0.0.1:40498). Aug 5 22:25:10.016230 systemd-logind[1426]: Removed session 25. Aug 5 22:25:10.092990 sshd[5309]: Accepted publickey for core from 10.0.0.1 port 40498 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:25:10.093758 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:25:10.129985 systemd-logind[1426]: New session 26 of user core. Aug 5 22:25:10.139846 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 5 22:25:13.713760 sshd[5309]: pam_unix(sshd:session): session closed for user core Aug 5 22:25:13.740920 systemd[1]: Started sshd@26-10.0.0.55:22-10.0.0.1:34628.service - OpenSSH per-connection server daemon (10.0.0.1:34628). Aug 5 22:25:13.741883 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:40498.service: Deactivated successfully. Aug 5 22:25:13.747967 systemd[1]: session-26.scope: Deactivated successfully. 
Aug 5 22:25:13.752901 systemd-logind[1426]: Session 26 logged out. Waiting for processes to exit. Aug 5 22:25:13.762311 systemd-logind[1426]: Removed session 26. Aug 5 22:25:13.793879 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 34628 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:25:13.800280 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:25:13.830688 systemd-logind[1426]: New session 27 of user core. Aug 5 22:25:13.844840 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 5 22:25:14.455678 sshd[5329]: pam_unix(sshd:session): session closed for user core Aug 5 22:25:14.484276 systemd[1]: sshd@26-10.0.0.55:22-10.0.0.1:34628.service: Deactivated successfully. Aug 5 22:25:14.495887 systemd[1]: session-27.scope: Deactivated successfully. Aug 5 22:25:14.508124 systemd-logind[1426]: Session 27 logged out. Waiting for processes to exit. Aug 5 22:25:14.518119 systemd[1]: Started sshd@27-10.0.0.55:22-10.0.0.1:34644.service - OpenSSH per-connection server daemon (10.0.0.1:34644). Aug 5 22:25:14.520556 systemd-logind[1426]: Removed session 27. Aug 5 22:25:14.586450 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 34644 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:25:14.590658 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:25:14.607633 systemd-logind[1426]: New session 28 of user core. Aug 5 22:25:14.613790 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 5 22:25:14.876521 sshd[5344]: pam_unix(sshd:session): session closed for user core Aug 5 22:25:14.884945 systemd[1]: sshd@27-10.0.0.55:22-10.0.0.1:34644.service: Deactivated successfully. Aug 5 22:25:14.888552 systemd[1]: session-28.scope: Deactivated successfully. Aug 5 22:25:14.891580 systemd-logind[1426]: Session 28 logged out. Waiting for processes to exit. Aug 5 22:25:14.892938 systemd-logind[1426]: Removed session 28. 
Aug 5 22:25:19.911591 systemd[1]: Started sshd@28-10.0.0.55:22-10.0.0.1:34658.service - OpenSSH per-connection server daemon (10.0.0.1:34658). Aug 5 22:25:20.003543 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 34658 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:25:20.004274 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:25:20.024477 systemd-logind[1426]: New session 29 of user core. Aug 5 22:25:20.030623 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 5 22:25:20.137491 systemd[1]: run-containerd-runc-k8s.io-b47b6315e441080b2d27dfef53a179276bd065ca1bb3454d60fdacd85c82fec0-runc.Mvrje8.mount: Deactivated successfully. Aug 5 22:25:20.303744 sshd[5391]: pam_unix(sshd:session): session closed for user core Aug 5 22:25:20.315842 systemd[1]: sshd@28-10.0.0.55:22-10.0.0.1:34658.service: Deactivated successfully. Aug 5 22:25:20.326630 systemd[1]: session-29.scope: Deactivated successfully. Aug 5 22:25:20.333484 systemd-logind[1426]: Session 29 logged out. Waiting for processes to exit. Aug 5 22:25:20.338237 systemd-logind[1426]: Removed session 29. 
Aug 5 22:25:24.048412 kubelet[2541]: I0805 22:25:24.046916 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-2bspg" podStartSLOduration=115.201214654 podStartE2EDuration="2m1.046861479s" podCreationTimestamp="2024-08-05 22:23:23 +0000 UTC" firstStartedPulling="2024-08-05 22:24:24.790556592 +0000 UTC m=+83.119294140" lastFinishedPulling="2024-08-05 22:24:30.636203427 +0000 UTC m=+88.964940965" observedRunningTime="2024-08-05 22:24:31.842676338 +0000 UTC m=+90.171413876" watchObservedRunningTime="2024-08-05 22:25:24.046861479 +0000 UTC m=+142.375599017" Aug 5 22:25:24.048412 kubelet[2541]: I0805 22:25:24.047158 2541 topology_manager.go:215] "Topology Admit Handler" podUID="6acfe2af-cddd-4d2c-8211-cc687c10e66d" podNamespace="calico-apiserver" podName="calico-apiserver-5b5c4c7666-89zbd" Aug 5 22:25:24.060849 systemd[1]: Created slice kubepods-besteffort-pod6acfe2af_cddd_4d2c_8211_cc687c10e66d.slice - libcontainer container kubepods-besteffort-pod6acfe2af_cddd_4d2c_8211_cc687c10e66d.slice. 
Aug 5 22:25:24.184459 kubelet[2541]: I0805 22:25:24.183606 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6acfe2af-cddd-4d2c-8211-cc687c10e66d-calico-apiserver-certs\") pod \"calico-apiserver-5b5c4c7666-89zbd\" (UID: \"6acfe2af-cddd-4d2c-8211-cc687c10e66d\") " pod="calico-apiserver/calico-apiserver-5b5c4c7666-89zbd" Aug 5 22:25:24.184459 kubelet[2541]: I0805 22:25:24.183690 2541 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqdz6\" (UniqueName: \"kubernetes.io/projected/6acfe2af-cddd-4d2c-8211-cc687c10e66d-kube-api-access-bqdz6\") pod \"calico-apiserver-5b5c4c7666-89zbd\" (UID: \"6acfe2af-cddd-4d2c-8211-cc687c10e66d\") " pod="calico-apiserver/calico-apiserver-5b5c4c7666-89zbd" Aug 5 22:25:24.290478 kubelet[2541]: E0805 22:25:24.288877 2541 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:25:24.290478 kubelet[2541]: E0805 22:25:24.289008 2541 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6acfe2af-cddd-4d2c-8211-cc687c10e66d-calico-apiserver-certs podName:6acfe2af-cddd-4d2c-8211-cc687c10e66d nodeName:}" failed. No retries permitted until 2024-08-05 22:25:24.788980852 +0000 UTC m=+143.117718390 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6acfe2af-cddd-4d2c-8211-cc687c10e66d-calico-apiserver-certs") pod "calico-apiserver-5b5c4c7666-89zbd" (UID: "6acfe2af-cddd-4d2c-8211-cc687c10e66d") : secret "calico-apiserver-certs" not found Aug 5 22:25:24.794290 kubelet[2541]: E0805 22:25:24.794158 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:25:24.971793 containerd[1439]: time="2024-08-05T22:25:24.971024078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5c4c7666-89zbd,Uid:6acfe2af-cddd-4d2c-8211-cc687c10e66d,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:25:25.388707 systemd[1]: Started sshd@29-10.0.0.55:22-10.0.0.1:45754.service - OpenSSH per-connection server daemon (10.0.0.1:45754). Aug 5 22:25:25.546791 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 45754 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY Aug 5 22:25:25.554402 sshd[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:25:25.596692 systemd-logind[1426]: New session 30 of user core. Aug 5 22:25:25.646181 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 5 22:25:26.062922 sshd[5434]: pam_unix(sshd:session): session closed for user core Aug 5 22:25:26.077120 systemd[1]: sshd@29-10.0.0.55:22-10.0.0.1:45754.service: Deactivated successfully. Aug 5 22:25:26.082345 systemd[1]: session-30.scope: Deactivated successfully. Aug 5 22:25:26.088464 systemd-logind[1426]: Session 30 logged out. Waiting for processes to exit. Aug 5 22:25:26.100092 systemd-logind[1426]: Removed session 30. 
Aug 5 22:25:26.520113 systemd-networkd[1380]: cali2e58c98b4fe: Link UP Aug 5 22:25:26.520918 systemd-networkd[1380]: cali2e58c98b4fe: Gained carrier Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.017 [INFO][5441] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0 calico-apiserver-5b5c4c7666- calico-apiserver 6acfe2af-cddd-4d2c-8211-cc687c10e66d 1288 0 2024-08-05 22:25:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b5c4c7666 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b5c4c7666-89zbd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e58c98b4fe [] []}} ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Namespace="calico-apiserver" Pod="calico-apiserver-5b5c4c7666-89zbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.018 [INFO][5441] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Namespace="calico-apiserver" Pod="calico-apiserver-5b5c4c7666-89zbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.129 [INFO][5460] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" HandleID="k8s-pod-network.dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Workload="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.167 [INFO][5460] ipam_plugin.go 264: Auto assigning IP 
ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" HandleID="k8s-pod-network.dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Workload="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003200e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b5c4c7666-89zbd", "timestamp":"2024-08-05 22:25:26.12904758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.168 [INFO][5460] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.168 [INFO][5460] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.168 [INFO][5460] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.178 [INFO][5460] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" host="localhost" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.273 [INFO][5460] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.329 [INFO][5460] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.340 [INFO][5460] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.351 [INFO][5460] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 
22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.351 [INFO][5460] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" host="localhost" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.361 [INFO][5460] ipam.go 1685: Creating new handle: k8s-pod-network.dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.372 [INFO][5460] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" host="localhost" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.485 [INFO][5460] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" host="localhost" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.485 [INFO][5460] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" host="localhost" Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.485 [INFO][5460] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:25:26.608612 containerd[1439]: 2024-08-05 22:25:26.485 [INFO][5460] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" HandleID="k8s-pod-network.dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Workload="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0"
Aug 5 22:25:26.610276 containerd[1439]: 2024-08-05 22:25:26.506 [INFO][5441] k8s.go 386: Populated endpoint ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Namespace="calico-apiserver" Pod="calico-apiserver-5b5c4c7666-89zbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0", GenerateName:"calico-apiserver-5b5c4c7666-", Namespace:"calico-apiserver", SelfLink:"", UID:"6acfe2af-cddd-4d2c-8211-cc687c10e66d", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 25, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5c4c7666", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b5c4c7666-89zbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e58c98b4fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:25:26.610276 containerd[1439]: 2024-08-05 22:25:26.509 [INFO][5441] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Namespace="calico-apiserver" Pod="calico-apiserver-5b5c4c7666-89zbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0"
Aug 5 22:25:26.610276 containerd[1439]: 2024-08-05 22:25:26.510 [INFO][5441] dataplane_linux.go 68: Setting the host side veth name to cali2e58c98b4fe ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Namespace="calico-apiserver" Pod="calico-apiserver-5b5c4c7666-89zbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0"
Aug 5 22:25:26.610276 containerd[1439]: 2024-08-05 22:25:26.520 [INFO][5441] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Namespace="calico-apiserver" Pod="calico-apiserver-5b5c4c7666-89zbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0"
Aug 5 22:25:26.610276 containerd[1439]: 2024-08-05 22:25:26.525 [INFO][5441] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Namespace="calico-apiserver" Pod="calico-apiserver-5b5c4c7666-89zbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0", GenerateName:"calico-apiserver-5b5c4c7666-", Namespace:"calico-apiserver", SelfLink:"", UID:"6acfe2af-cddd-4d2c-8211-cc687c10e66d", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 25, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5c4c7666", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba", Pod:"calico-apiserver-5b5c4c7666-89zbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e58c98b4fe", MAC:"2a:1f:d2:ea:55:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:25:26.610276 containerd[1439]: 2024-08-05 22:25:26.583 [INFO][5441] k8s.go 500: Wrote updated endpoint to datastore ContainerID="dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba" Namespace="calico-apiserver" Pod="calico-apiserver-5b5c4c7666-89zbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5c4c7666--89zbd-eth0"
Aug 5 22:25:26.744057 containerd[1439]: time="2024-08-05T22:25:26.743850729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:25:26.744057 containerd[1439]: time="2024-08-05T22:25:26.743971035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:25:26.744057 containerd[1439]: time="2024-08-05T22:25:26.744001622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:25:26.744057 containerd[1439]: time="2024-08-05T22:25:26.744021119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:25:26.842958 systemd[1]: Started cri-containerd-dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba.scope - libcontainer container dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba.
Aug 5 22:25:26.907614 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 5 22:25:27.009779 containerd[1439]: time="2024-08-05T22:25:27.009094475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5c4c7666-89zbd,Uid:6acfe2af-cddd-4d2c-8211-cc687c10e66d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba\""
Aug 5 22:25:27.017268 containerd[1439]: time="2024-08-05T22:25:27.016916355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Aug 5 22:25:27.674892 systemd-networkd[1380]: cali2e58c98b4fe: Gained IPv6LL
Aug 5 22:25:29.795523 kubelet[2541]: E0805 22:25:29.795080 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:25:31.104251 systemd[1]: Started sshd@30-10.0.0.55:22-10.0.0.1:50800.service - OpenSSH per-connection server daemon (10.0.0.1:50800).
Aug 5 22:25:31.243508 sshd[5536]: Accepted publickey for core from 10.0.0.1 port 50800 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:25:31.255569 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:25:31.302181 systemd-logind[1426]: New session 31 of user core.
Aug 5 22:25:31.316147 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 5 22:25:31.736432 sshd[5536]: pam_unix(sshd:session): session closed for user core
Aug 5 22:25:31.742205 systemd[1]: sshd@30-10.0.0.55:22-10.0.0.1:50800.service: Deactivated successfully.
Aug 5 22:25:31.764018 systemd[1]: session-31.scope: Deactivated successfully.
Aug 5 22:25:31.781328 systemd-logind[1426]: Session 31 logged out. Waiting for processes to exit.
Aug 5 22:25:31.798255 systemd-logind[1426]: Removed session 31.
Aug 5 22:25:34.800860 kubelet[2541]: E0805 22:25:34.798800 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:25:35.295974 containerd[1439]: time="2024-08-05T22:25:35.295890717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:25:35.298933 containerd[1439]: time="2024-08-05T22:25:35.298839261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Aug 5 22:25:35.303931 containerd[1439]: time="2024-08-05T22:25:35.303399929Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:25:35.315458 containerd[1439]: time="2024-08-05T22:25:35.312202028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:25:35.315458 containerd[1439]: time="2024-08-05T22:25:35.315007654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 8.298029373s"
Aug 5 22:25:35.315458 containerd[1439]: time="2024-08-05T22:25:35.315078026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Aug 5 22:25:35.323017 containerd[1439]: time="2024-08-05T22:25:35.320788813Z" level=info msg="CreateContainer within sandbox \"dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 5 22:25:35.394502 containerd[1439]: time="2024-08-05T22:25:35.394280095Z" level=info msg="CreateContainer within sandbox \"dee6259c8e58d4f64f4ec849ae5bbc4bc8e65182f78939cd51bd3bec5d9e73ba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"afebe292ce78f43ebbddf7e1a90eeca6e25a5b6763028277127d7a30c5207be0\""
Aug 5 22:25:35.395382 containerd[1439]: time="2024-08-05T22:25:35.395360363Z" level=info msg="StartContainer for \"afebe292ce78f43ebbddf7e1a90eeca6e25a5b6763028277127d7a30c5207be0\""
Aug 5 22:25:35.533902 systemd[1]: Started cri-containerd-afebe292ce78f43ebbddf7e1a90eeca6e25a5b6763028277127d7a30c5207be0.scope - libcontainer container afebe292ce78f43ebbddf7e1a90eeca6e25a5b6763028277127d7a30c5207be0.
Aug 5 22:25:35.673495 containerd[1439]: time="2024-08-05T22:25:35.672063235Z" level=info msg="StartContainer for \"afebe292ce78f43ebbddf7e1a90eeca6e25a5b6763028277127d7a30c5207be0\" returns successfully"
Aug 5 22:25:36.836049 systemd[1]: Started sshd@31-10.0.0.55:22-10.0.0.1:50816.service - OpenSSH per-connection server daemon (10.0.0.1:50816).
Aug 5 22:25:36.956940 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 50816 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:25:36.960518 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:25:36.978800 systemd-logind[1426]: New session 32 of user core.
Aug 5 22:25:36.992266 systemd[1]: Started session-32.scope - Session 32 of User core.
Aug 5 22:25:37.386180 kubelet[2541]: I0805 22:25:37.384107 2541 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b5c4c7666-89zbd" podStartSLOduration=6.084524483 podStartE2EDuration="14.384044213s" podCreationTimestamp="2024-08-05 22:25:23 +0000 UTC" firstStartedPulling="2024-08-05 22:25:27.016344131 +0000 UTC m=+145.345081669" lastFinishedPulling="2024-08-05 22:25:35.315863861 +0000 UTC m=+153.644601399" observedRunningTime="2024-08-05 22:25:36.345374213 +0000 UTC m=+154.674111751" watchObservedRunningTime="2024-08-05 22:25:37.384044213 +0000 UTC m=+155.712781751"
Aug 5 22:25:37.393395 sshd[5614]: pam_unix(sshd:session): session closed for user core
Aug 5 22:25:37.405276 systemd[1]: sshd@31-10.0.0.55:22-10.0.0.1:50816.service: Deactivated successfully.
Aug 5 22:25:37.413530 systemd[1]: session-32.scope: Deactivated successfully.
Aug 5 22:25:37.417263 systemd-logind[1426]: Session 32 logged out. Waiting for processes to exit.
Aug 5 22:25:37.419238 systemd-logind[1426]: Removed session 32.
Aug 5 22:25:37.794596 kubelet[2541]: E0805 22:25:37.792305 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:25:38.795256 kubelet[2541]: E0805 22:25:38.795123 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 5 22:25:42.461554 systemd[1]: Started sshd@32-10.0.0.55:22-10.0.0.1:49460.service - OpenSSH per-connection server daemon (10.0.0.1:49460).
Aug 5 22:25:42.550023 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 49460 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:25:42.550778 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:25:42.575211 systemd-logind[1426]: New session 33 of user core.
Aug 5 22:25:42.596088 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 5 22:25:42.920637 sshd[5640]: pam_unix(sshd:session): session closed for user core
Aug 5 22:25:42.939605 systemd[1]: sshd@32-10.0.0.55:22-10.0.0.1:49460.service: Deactivated successfully.
Aug 5 22:25:42.947494 systemd[1]: session-33.scope: Deactivated successfully.
Aug 5 22:25:42.957971 systemd-logind[1426]: Session 33 logged out. Waiting for processes to exit.
Aug 5 22:25:42.962251 systemd-logind[1426]: Removed session 33.
Aug 5 22:25:47.995220 systemd[1]: Started sshd@33-10.0.0.55:22-10.0.0.1:49476.service - OpenSSH per-connection server daemon (10.0.0.1:49476).
Aug 5 22:25:48.148323 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 49476 ssh2: RSA SHA256:ptvpYoWJLxritDvuuuq7wnHVeQD0cFOU3CO7OKKv9QY
Aug 5 22:25:48.152982 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:25:48.180888 systemd-logind[1426]: New session 34 of user core.
Aug 5 22:25:48.192774 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 5 22:25:48.602224 sshd[5686]: pam_unix(sshd:session): session closed for user core
Aug 5 22:25:48.608249 systemd[1]: sshd@33-10.0.0.55:22-10.0.0.1:49476.service: Deactivated successfully.
Aug 5 22:25:48.615737 systemd[1]: session-34.scope: Deactivated successfully.
Aug 5 22:25:48.618458 systemd-logind[1426]: Session 34 logged out. Waiting for processes to exit.
Aug 5 22:25:48.635018 systemd-logind[1426]: Removed session 34.