Jul 2 00:20:14.957265 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024
Jul 2 00:20:14.957296 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:20:14.957311 kernel: BIOS-provided physical RAM map:
Jul 2 00:20:14.957320 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 00:20:14.957329 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 00:20:14.957338 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 00:20:14.957348 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Jul 2 00:20:14.957358 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Jul 2 00:20:14.957367 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 00:20:14.957378 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 00:20:14.957388 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 2 00:20:14.957397 kernel: NX (Execute Disable) protection: active
Jul 2 00:20:14.957405 kernel: APIC: Static calls initialized
Jul 2 00:20:14.957415 kernel: SMBIOS 2.8 present.
Jul 2 00:20:14.957426 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 2 00:20:14.957439 kernel: Hypervisor detected: KVM
Jul 2 00:20:14.957449 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 00:20:14.957458 kernel: kvm-clock: using sched offset of 2892625239 cycles
Jul 2 00:20:14.957469 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 00:20:14.957479 kernel: tsc: Detected 2794.746 MHz processor
Jul 2 00:20:14.957495 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 00:20:14.957505 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 00:20:14.957515 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Jul 2 00:20:14.957526 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 2 00:20:14.957539 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 00:20:14.957549 kernel: Using GB pages for direct mapping
Jul 2 00:20:14.957559 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:20:14.957569 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Jul 2 00:20:14.957579 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:20:14.957589 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:20:14.957599 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:20:14.957609 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 2 00:20:14.957620 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:20:14.957645 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:20:14.957655 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 00:20:14.957666 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Jul 2 00:20:14.957676 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Jul 2 00:20:14.957686 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 2 00:20:14.957696 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Jul 2 00:20:14.957706 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Jul 2 00:20:14.957723 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Jul 2 00:20:14.957733 kernel: No NUMA configuration found
Jul 2 00:20:14.957744 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Jul 2 00:20:14.957755 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Jul 2 00:20:14.957765 kernel: Zone ranges:
Jul 2 00:20:14.957776 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 00:20:14.957787 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Jul 2 00:20:14.957800 kernel: Normal empty
Jul 2 00:20:14.957811 kernel: Movable zone start for each node
Jul 2 00:20:14.957821 kernel: Early memory node ranges
Jul 2 00:20:14.957832 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 00:20:14.957842 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Jul 2 00:20:14.957853 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Jul 2 00:20:14.957864 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 00:20:14.957874 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 00:20:14.957885 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Jul 2 00:20:14.957898 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 00:20:14.957909 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 00:20:14.957920 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 00:20:14.957930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 00:20:14.957941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 00:20:14.957951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 00:20:14.957962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 00:20:14.957973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 00:20:14.957983 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 00:20:14.957997 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 00:20:14.958010 kernel: TSC deadline timer available
Jul 2 00:20:14.958021 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 2 00:20:14.958032 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 2 00:20:14.958043 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 2 00:20:14.958053 kernel: kvm-guest: setup PV sched yield
Jul 2 00:20:14.958064 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Jul 2 00:20:14.958074 kernel: Booting paravirtualized kernel on KVM
Jul 2 00:20:14.958085 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 00:20:14.958136 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 2 00:20:14.958148 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Jul 2 00:20:14.958159 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Jul 2 00:20:14.958169 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 2 00:20:14.958179 kernel: kvm-guest: PV spinlocks enabled
Jul 2 00:20:14.958190 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 00:20:14.958202 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b
Jul 2 00:20:14.958214 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:20:14.958228 kernel: random: crng init done
Jul 2 00:20:14.958239 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:20:14.958249 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:20:14.958260 kernel: Fallback order for Node 0: 0
Jul 2 00:20:14.958271 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Jul 2 00:20:14.958281 kernel: Policy zone: DMA32
Jul 2 00:20:14.958292 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:20:14.958303 kernel: Memory: 2428452K/2571756K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 143044K reserved, 0K cma-reserved)
Jul 2 00:20:14.958314 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 00:20:14.958327 kernel: ftrace: allocating 37658 entries in 148 pages
Jul 2 00:20:14.958338 kernel: ftrace: allocated 148 pages with 3 groups
Jul 2 00:20:14.958348 kernel: Dynamic Preempt: voluntary
Jul 2 00:20:14.958359 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:20:14.958370 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:20:14.958381 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 00:20:14.958392 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:20:14.958403 kernel: Rude variant of Tasks RCU enabled.
Jul 2 00:20:14.958413 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:20:14.958424 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:20:14.958438 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 00:20:14.958448 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 2 00:20:14.958459 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:20:14.958470 kernel: Console: colour VGA+ 80x25
Jul 2 00:20:14.958480 kernel: printk: console [ttyS0] enabled
Jul 2 00:20:14.958490 kernel: ACPI: Core revision 20230628
Jul 2 00:20:14.958501 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 2 00:20:14.958512 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 00:20:14.958523 kernel: x2apic enabled
Jul 2 00:20:14.958536 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 2 00:20:14.958547 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 2 00:20:14.958558 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 2 00:20:14.958572 kernel: kvm-guest: setup PV IPIs
Jul 2 00:20:14.958582 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 00:20:14.958593 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 00:20:14.958604 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 2 00:20:14.958615 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 00:20:14.958649 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 2 00:20:14.958659 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 2 00:20:14.958670 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 00:20:14.958684 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 00:20:14.958695 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 00:20:14.958706 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 00:20:14.958717 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 2 00:20:14.958728 kernel: RETBleed: Mitigation: untrained return thunk
Jul 2 00:20:14.958740 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 00:20:14.958754 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 2 00:20:14.958766 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 2 00:20:14.958778 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 2 00:20:14.958789 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 2 00:20:14.958800 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 00:20:14.958812 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 00:20:14.958823 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 00:20:14.958836 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 00:20:14.958847 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 2 00:20:14.958857 kernel: Freeing SMP alternatives memory: 32K
Jul 2 00:20:14.958868 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:20:14.958879 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:20:14.958890 kernel: SELinux: Initializing.
Jul 2 00:20:14.958902 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:20:14.958913 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:20:14.958925 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 2 00:20:14.958940 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:20:14.958951 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:20:14.958962 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:20:14.958974 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 2 00:20:14.958985 kernel: ... version: 0
Jul 2 00:20:14.958996 kernel: ... bit width: 48
Jul 2 00:20:14.959008 kernel: ... generic registers: 6
Jul 2 00:20:14.959019 kernel: ... value mask: 0000ffffffffffff
Jul 2 00:20:14.959030 kernel: ... max period: 00007fffffffffff
Jul 2 00:20:14.959045 kernel: ... fixed-purpose events: 0
Jul 2 00:20:14.959056 kernel: ... event mask: 000000000000003f
Jul 2 00:20:14.959067 kernel: signal: max sigframe size: 1776
Jul 2 00:20:14.959078 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:20:14.959090 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:20:14.959115 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:20:14.959127 kernel: smpboot: x86: Booting SMP configuration:
Jul 2 00:20:14.959138 kernel: .... node #0, CPUs: #1 #2 #3
Jul 2 00:20:14.959153 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 00:20:14.959164 kernel: smpboot: Max logical packages: 1
Jul 2 00:20:14.959179 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 2 00:20:14.959191 kernel: devtmpfs: initialized
Jul 2 00:20:14.959202 kernel: x86/mm: Memory block size: 128MB
Jul 2 00:20:14.959213 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:20:14.959225 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 00:20:14.959236 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:20:14.959247 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:20:14.959259 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:20:14.959271 kernel: audit: type=2000 audit(1719879613.089:1): state=initialized audit_enabled=0 res=1
Jul 2 00:20:14.959285 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:20:14.959296 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 00:20:14.959308 kernel: cpuidle: using governor menu
Jul 2 00:20:14.959319 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:20:14.959330 kernel: dca service started, version 1.12.1
Jul 2 00:20:14.959342 kernel: PCI: Using configuration type 1 for base access
Jul 2 00:20:14.959353 kernel: PCI: Using configuration type 1 for extended access
Jul 2 00:20:14.959364 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 00:20:14.959376 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:20:14.959390 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:20:14.959402 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:20:14.959413 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:20:14.959424 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:20:14.959435 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:20:14.959447 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:20:14.959459 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:20:14.959470 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:20:14.959481 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 2 00:20:14.959495 kernel: ACPI: Interpreter enabled
Jul 2 00:20:14.959506 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 00:20:14.959517 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 00:20:14.959529 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 00:20:14.959541 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 00:20:14.959551 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 00:20:14.959562 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 00:20:14.959834 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:20:14.959856 kernel: acpiphp: Slot [3] registered
Jul 2 00:20:14.959866 kernel: acpiphp: Slot [4] registered
Jul 2 00:20:14.959877 kernel: acpiphp: Slot [5] registered
Jul 2 00:20:14.959887 kernel: acpiphp: Slot [6] registered
Jul 2 00:20:14.959897 kernel: acpiphp: Slot [7] registered
Jul 2 00:20:14.959907 kernel: acpiphp: Slot [8] registered
Jul 2 00:20:14.959917 kernel: acpiphp: Slot [9] registered
Jul 2 00:20:14.959928 kernel: acpiphp: Slot [10] registered
Jul 2 00:20:14.959938 kernel: acpiphp: Slot [11] registered
Jul 2 00:20:14.959952 kernel: acpiphp: Slot [12] registered
Jul 2 00:20:14.959962 kernel: acpiphp: Slot [13] registered
Jul 2 00:20:14.959972 kernel: acpiphp: Slot [14] registered
Jul 2 00:20:14.959982 kernel: acpiphp: Slot [15] registered
Jul 2 00:20:14.959993 kernel: acpiphp: Slot [16] registered
Jul 2 00:20:14.960003 kernel: acpiphp: Slot [17] registered
Jul 2 00:20:14.960014 kernel: acpiphp: Slot [18] registered
Jul 2 00:20:14.960024 kernel: acpiphp: Slot [19] registered
Jul 2 00:20:14.960034 kernel: acpiphp: Slot [20] registered
Jul 2 00:20:14.960045 kernel: acpiphp: Slot [21] registered
Jul 2 00:20:14.960058 kernel: acpiphp: Slot [22] registered
Jul 2 00:20:14.960068 kernel: acpiphp: Slot [23] registered
Jul 2 00:20:14.960079 kernel: acpiphp: Slot [24] registered
Jul 2 00:20:14.960089 kernel: acpiphp: Slot [25] registered
Jul 2 00:20:14.960099 kernel: acpiphp: Slot [26] registered
Jul 2 00:20:14.960125 kernel: acpiphp: Slot [27] registered
Jul 2 00:20:14.960136 kernel: acpiphp: Slot [28] registered
Jul 2 00:20:14.960146 kernel: acpiphp: Slot [29] registered
Jul 2 00:20:14.960157 kernel: acpiphp: Slot [30] registered
Jul 2 00:20:14.960171 kernel: acpiphp: Slot [31] registered
Jul 2 00:20:14.960181 kernel: PCI host bridge to bus 0000:00
Jul 2 00:20:14.960360 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 00:20:14.960505 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 00:20:14.960656 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 00:20:14.960793 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jul 2 00:20:14.960929 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 00:20:14.961068 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 00:20:14.961281 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 00:20:14.961455 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 00:20:14.961644 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 00:20:14.961797 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jul 2 00:20:14.961970 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 00:20:14.962161 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 00:20:14.962313 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 00:20:14.962459 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 00:20:14.962718 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 00:20:14.962914 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 00:20:14.963074 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 00:20:14.963275 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jul 2 00:20:14.963444 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 2 00:20:14.963600 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 2 00:20:14.963772 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 2 00:20:14.963930 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 00:20:14.964128 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 00:20:14.964293 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 00:20:14.964509 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 2 00:20:14.964733 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 2 00:20:14.964947 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 00:20:14.965146 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 00:20:14.965316 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 2 00:20:14.965482 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 2 00:20:14.965680 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jul 2 00:20:14.965845 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jul 2 00:20:14.966016 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 2 00:20:14.966198 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 2 00:20:14.966360 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 2 00:20:14.966376 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 00:20:14.966388 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 00:20:14.966399 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 00:20:14.966411 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 00:20:14.966423 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 00:20:14.966439 kernel: iommu: Default domain type: Translated
Jul 2 00:20:14.966451 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 00:20:14.966463 kernel: PCI: Using ACPI for IRQ routing
Jul 2 00:20:14.966474 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 00:20:14.966486 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 00:20:14.966498 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Jul 2 00:20:14.966663 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 00:20:14.966875 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 00:20:14.967056 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 00:20:14.967074 kernel: vgaarb: loaded
Jul 2 00:20:14.967086 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 2 00:20:14.967098 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 2 00:20:14.967124 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 00:20:14.967136 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:20:14.967148 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:20:14.967160 kernel: pnp: PnP ACPI init
Jul 2 00:20:14.967352 kernel: pnp 00:02: [dma 2]
Jul 2 00:20:14.967376 kernel: pnp: PnP ACPI: found 6 devices
Jul 2 00:20:14.967388 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 00:20:14.967400 kernel: NET: Registered PF_INET protocol family
Jul 2 00:20:14.967412 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:20:14.967423 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:20:14.967435 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:20:14.967447 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:20:14.967459 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:20:14.967474 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:20:14.967486 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:20:14.967497 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:20:14.967509 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:20:14.967521 kernel: NET: Registered PF_XDP protocol family
Jul 2 00:20:14.967688 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 00:20:14.967835 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 00:20:14.967979 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 00:20:14.968168 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jul 2 00:20:14.968357 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 00:20:14.968524 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 00:20:14.968706 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 00:20:14.968723 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:20:14.968735 kernel: Initialise system trusted keyrings
Jul 2 00:20:14.968747 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:20:14.968760 kernel: Key type asymmetric registered
Jul 2 00:20:14.968771 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:20:14.968789 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 2 00:20:14.968800 kernel: io scheduler mq-deadline registered
Jul 2 00:20:14.968812 kernel: io scheduler kyber registered
Jul 2 00:20:14.968823 kernel: io scheduler bfq registered
Jul 2 00:20:14.968835 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 00:20:14.968848 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 00:20:14.968860 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 00:20:14.968872 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 00:20:14.968884 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:20:14.968898 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 00:20:14.968910 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 00:20:14.968922 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 00:20:14.968933 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 00:20:14.969218 kernel: rtc_cmos 00:05: RTC can wake from S4
Jul 2 00:20:14.969238 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 00:20:14.969389 kernel: rtc_cmos 00:05: registered as rtc0
Jul 2 00:20:14.969538 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T00:20:14 UTC (1719879614)
Jul 2 00:20:14.969712 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 2 00:20:14.969729 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 2 00:20:14.969741 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:20:14.969752 kernel: Segment Routing with IPv6
Jul 2 00:20:14.969764 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:20:14.969776 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:20:14.969787 kernel: Key type dns_resolver registered
Jul 2 00:20:14.969799 kernel: IPI shorthand broadcast: enabled
Jul 2 00:20:14.969810 kernel: sched_clock: Marking stable (1020002909, 123327046)->(1212498925, -69168970)
Jul 2 00:20:14.969826 kernel: registered taskstats version 1
Jul 2 00:20:14.969838 kernel: Loading compiled-in X.509 certificates
Jul 2 00:20:14.969850 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771'
Jul 2 00:20:14.969862 kernel: Key type .fscrypt registered
Jul 2 00:20:14.969874 kernel: Key type fscrypt-provisioning registered
Jul 2 00:20:14.969886 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:20:14.969898 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:20:14.969909 kernel: ima: No architecture policies found
Jul 2 00:20:14.969921 kernel: clk: Disabling unused clocks
Jul 2 00:20:14.969936 kernel: Freeing unused kernel image (initmem) memory: 49328K
Jul 2 00:20:14.969947 kernel: Write protecting the kernel read-only data: 36864k
Jul 2 00:20:14.969959 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Jul 2 00:20:14.969971 kernel: Run /init as init process
Jul 2 00:20:14.969982 kernel: with arguments:
Jul 2 00:20:14.969994 kernel: /init
Jul 2 00:20:14.970005 kernel: with environment:
Jul 2 00:20:14.970017 kernel: HOME=/
Jul 2 00:20:14.970050 kernel: TERM=linux
Jul 2 00:20:14.970068 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:20:14.970083 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:20:14.970099 systemd[1]: Detected virtualization kvm.
Jul 2 00:20:14.970125 systemd[1]: Detected architecture x86-64.
Jul 2 00:20:14.970136 systemd[1]: Running in initrd.
Jul 2 00:20:14.970148 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:20:14.970161 systemd[1]: Hostname set to .
Jul 2 00:20:14.970179 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:20:14.970191 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:20:14.970204 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:20:14.970216 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:20:14.970229 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:20:14.970243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:20:14.970256 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:20:14.970269 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:20:14.970288 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:20:14.970301 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:20:14.970314 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:20:14.970327 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:20:14.970340 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:20:14.970353 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:20:14.970365 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:20:14.970381 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:20:14.970394 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:20:14.970408 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:20:14.970421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:20:14.970434 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:20:14.970446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:20:14.970459 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:20:14.970472 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:20:14.970489 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:20:14.970505 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:20:14.970518 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:20:14.970530 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:20:14.970543 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:20:14.970556 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:20:14.970572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:20:14.970585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:20:14.970597 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:20:14.970610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:20:14.970623 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:20:14.970649 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:20:14.970697 systemd-journald[193]: Collecting audit messages is disabled.
Jul 2 00:20:14.970729 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:20:14.970746 systemd-journald[193]: Journal started Jul 2 00:20:14.970773 systemd-journald[193]: Runtime Journal (/run/log/journal/9e5abaf1ce80454fa2d7c23b8f4e162c) is 6.0M, max 48.4M, 42.3M free. Jul 2 00:20:14.965009 systemd-modules-load[194]: Inserted module 'overlay' Jul 2 00:20:15.003822 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 00:20:15.003865 kernel: Bridge firewalling registered Jul 2 00:20:15.003871 systemd-modules-load[194]: Inserted module 'br_netfilter' Jul 2 00:20:15.008871 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:20:15.009481 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:20:15.012870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:20:15.029453 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:20:15.033370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:20:15.036569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:20:15.041393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:20:15.051673 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:20:15.052709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:20:15.059587 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:20:15.071427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:20:15.072236 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:20:15.075206 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 2 00:20:15.094700 dracut-cmdline[229]: dracut-dracut-053 Jul 2 00:20:15.097836 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:20:15.119712 systemd-resolved[227]: Positive Trust Anchors: Jul 2 00:20:15.119737 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:20:15.119775 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:20:15.125132 systemd-resolved[227]: Defaulting to hostname 'linux'. Jul 2 00:20:15.126604 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:20:15.130860 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:20:15.215163 kernel: SCSI subsystem initialized Jul 2 00:20:15.228154 kernel: Loading iSCSI transport class v2.0-870. Jul 2 00:20:15.243152 kernel: iscsi: registered transport (tcp) Jul 2 00:20:15.271160 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:20:15.271235 kernel: QLogic iSCSI HBA Driver Jul 2 00:20:15.324933 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jul 2 00:20:15.343467 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:20:15.373145 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 00:20:15.373223 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:20:15.374796 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:20:15.423145 kernel: raid6: avx2x4 gen() 27353 MB/s Jul 2 00:20:15.440149 kernel: raid6: avx2x2 gen() 29037 MB/s Jul 2 00:20:15.457308 kernel: raid6: avx2x1 gen() 23055 MB/s Jul 2 00:20:15.457387 kernel: raid6: using algorithm avx2x2 gen() 29037 MB/s Jul 2 00:20:15.482148 kernel: raid6: .... xor() 18043 MB/s, rmw enabled Jul 2 00:20:15.482250 kernel: raid6: using avx2x2 recovery algorithm Jul 2 00:20:15.509147 kernel: xor: automatically using best checksumming function avx Jul 2 00:20:15.700140 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:20:15.715663 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:20:15.728440 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:20:15.744855 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jul 2 00:20:15.751010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:20:15.758818 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 00:20:15.773548 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Jul 2 00:20:15.808533 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:20:15.816359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:20:15.891084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:20:15.908370 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 2 00:20:15.925584 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:20:15.928361 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:20:15.938038 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 2 00:20:15.952753 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 00:20:15.952775 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 00:20:15.952961 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 00:20:15.952978 kernel: GPT:9289727 != 19775487 Jul 2 00:20:15.952992 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 00:20:15.953007 kernel: GPT:9289727 != 19775487 Jul 2 00:20:15.953021 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 00:20:15.953040 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:20:15.929837 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:20:15.933200 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:20:15.943405 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:20:15.960016 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:20:15.970404 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:20:15.970556 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:20:15.976546 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 00:20:15.976571 kernel: AES CTR mode by8 optimization enabled Jul 2 00:20:15.976584 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:20:15.979541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:20:15.979734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 2 00:20:15.986969 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:15.995127 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (464) Jul 2 00:20:15.996194 kernel: libata version 3.00 loaded. Jul 2 00:20:15.999465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:16.003723 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 00:20:16.012463 kernel: scsi host0: ata_piix Jul 2 00:20:16.012688 kernel: scsi host1: ata_piix Jul 2 00:20:16.012875 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 00:20:16.012902 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 00:20:16.029147 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Jul 2 00:20:16.036841 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 00:20:16.072350 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:20:16.081952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 00:20:16.091022 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 00:20:16.094317 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 00:20:16.104833 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:20:16.116335 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:20:16.125290 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:20:16.153021 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 00:20:16.167498 kernel: ata2: found unknown device (class 0) Jul 2 00:20:16.167558 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 00:20:16.170150 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 00:20:16.216676 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 00:20:16.229280 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 00:20:16.229302 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 00:20:16.335959 disk-uuid[540]: Primary Header is updated. Jul 2 00:20:16.335959 disk-uuid[540]: Secondary Entries is updated. Jul 2 00:20:16.335959 disk-uuid[540]: Secondary Header is updated. Jul 2 00:20:16.343127 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:20:16.351209 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:20:17.356150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:20:17.356227 disk-uuid[565]: The operation has completed successfully. Jul 2 00:20:17.398008 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:20:17.398215 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:20:17.416350 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:20:17.420932 sh[579]: Success Jul 2 00:20:17.435160 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 00:20:17.474417 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:20:17.494668 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:20:17.498832 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 2 00:20:17.514620 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1 Jul 2 00:20:17.514735 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:20:17.514784 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:20:17.515875 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:20:17.516781 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:20:17.523859 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:20:17.525220 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 00:20:17.534411 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:20:17.537190 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:20:17.548637 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:20:17.548695 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:20:17.548706 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:20:17.552173 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:20:17.564206 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:20:17.566259 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:20:17.578168 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:20:17.588526 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:20:17.712454 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:20:17.729494 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 2 00:20:17.763144 systemd-networkd[761]: lo: Link UP Jul 2 00:20:17.763160 systemd-networkd[761]: lo: Gained carrier Jul 2 00:20:17.765499 systemd-networkd[761]: Enumeration completed Jul 2 00:20:17.765651 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:20:17.767471 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:20:17.767479 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:20:17.767874 systemd[1]: Reached target network.target - Network. Jul 2 00:20:17.768853 systemd-networkd[761]: eth0: Link UP Jul 2 00:20:17.768858 systemd-networkd[761]: eth0: Gained carrier Jul 2 00:20:17.768867 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:20:17.783229 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:20:17.898892 ignition[669]: Ignition 2.18.0 Jul 2 00:20:17.898911 ignition[669]: Stage: fetch-offline Jul 2 00:20:17.898993 ignition[669]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:17.899010 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:20:17.899321 ignition[669]: parsed url from cmdline: "" Jul 2 00:20:17.899327 ignition[669]: no config URL provided Jul 2 00:20:17.899334 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:20:17.899348 ignition[669]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:20:17.900083 ignition[669]: op(1): [started] loading QEMU firmware config module Jul 2 00:20:17.900140 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 00:20:17.914884 ignition[669]: op(1): [finished] loading QEMU firmware config module Jul 2 00:20:17.954163 ignition[669]: parsing config with SHA512: 1a5d407083e5f1cd61260534bdceabebe04861e523458fa7593863b248f0b0ec37277686024ca50d8a0df02b3780c151c76d2172eddfbef7be3c3c14bad04307
Jul 2 00:20:17.961361 unknown[669]: fetched base config from "system" Jul 2 00:20:17.961377 unknown[669]: fetched user config from "qemu" Jul 2 00:20:17.962661 systemd-resolved[227]: Detected conflict on linux IN A 10.0.0.95 Jul 2 00:20:17.962673 systemd-resolved[227]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Jul 2 00:20:17.963639 ignition[669]: fetch-offline: fetch-offline passed Jul 2 00:20:17.963720 ignition[669]: Ignition finished successfully Jul 2 00:20:17.969851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:20:17.976279 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 00:20:17.981380 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 00:20:18.027226 ignition[774]: Ignition 2.18.0 Jul 2 00:20:18.027240 ignition[774]: Stage: kargs Jul 2 00:20:18.027522 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:18.027539 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:20:18.028520 ignition[774]: kargs: kargs passed Jul 2 00:20:18.028586 ignition[774]: Ignition finished successfully Jul 2 00:20:18.034651 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:20:18.047339 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:20:18.069545 ignition[782]: Ignition 2.18.0 Jul 2 00:20:18.069581 ignition[782]: Stage: disks Jul 2 00:20:18.069833 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:18.069851 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:20:18.070876 ignition[782]: disks: disks passed Jul 2 00:20:18.070933 ignition[782]: Ignition finished successfully Jul 2 00:20:18.075610 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:20:18.078353 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:20:18.080751 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:20:18.083226 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:20:18.085300 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:20:18.087359 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:20:18.106550 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 00:20:18.144985 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 00:20:18.282066 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:20:18.292432 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 00:20:18.426145 kernel: EXT4-fs (vda9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none. Jul 2 00:20:18.427544 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:20:18.430501 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:20:18.449362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:20:18.453521 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:20:18.457070 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jul 2 00:20:18.459738 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:20:18.459805 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:20:18.463850 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Jul 2 00:20:18.463875 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:20:18.463887 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:20:18.464856 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:20:18.470152 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:20:18.472243 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:20:18.474426 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:20:18.494459 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 00:20:18.537302 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:20:18.542198 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:20:18.548890 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:20:18.555855 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:20:18.667568 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:20:18.682354 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:20:18.685237 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 00:20:18.692302 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 00:20:18.693529 kernel: BTRFS info (device vda6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:20:18.724302 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 2 00:20:18.774741 ignition[918]: INFO : Ignition 2.18.0 Jul 2 00:20:18.774741 ignition[918]: INFO : Stage: mount Jul 2 00:20:18.776908 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:18.776908 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:20:18.776908 ignition[918]: INFO : mount: mount passed Jul 2 00:20:18.776908 ignition[918]: INFO : Ignition finished successfully Jul 2 00:20:18.779508 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 00:20:18.791263 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:20:18.801510 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:20:18.816138 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928) Jul 2 00:20:18.821016 kernel: BTRFS info (device vda6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:20:18.821045 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:20:18.821056 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:20:18.839141 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:20:18.841481 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:20:18.868325 ignition[945]: INFO : Ignition 2.18.0 Jul 2 00:20:18.868325 ignition[945]: INFO : Stage: files Jul 2 00:20:18.870711 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:20:18.870711 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:20:18.870711 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:20:18.874991 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:20:18.874991 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:20:18.874991 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:20:18.874991 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:20:18.874991 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:20:18.874838 unknown[945]: wrote ssh authorized keys file for user: core Jul 2 00:20:18.884502 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:20:18.884502 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 00:20:18.902314 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 00:20:18.981384 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 00:20:18.981384 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:20:18.986394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 00:20:18.989283 systemd-networkd[761]: eth0: Gained IPv6LL Jul 2 00:20:19.325383 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 00:20:19.830268 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:20:19.830268 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 00:20:19.837006 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:20:19.837006 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:20:19.837006 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 00:20:19.837006 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 2 00:20:19.837006 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:20:19.837006 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:20:19.837006 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 2 00:20:19.837006 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 00:20:19.869975 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:20:19.893711 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 00:20:19.895610 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 00:20:19.895610 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:20:19.895610 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:20:19.895610 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:20:19.895610 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:20:19.895610 ignition[945]: INFO : files: files passed Jul 2 00:20:19.895610 ignition[945]: INFO : Ignition finished successfully Jul 2 00:20:19.898059 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:20:19.914364 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:20:19.916184 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 00:20:19.918923 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:20:19.919042 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:20:19.931805 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory Jul 2 00:20:19.935239 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:20:19.935239 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:20:19.939419 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:20:19.941975 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:20:19.943051 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:20:19.957500 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:20:19.994199 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:20:19.994354 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:20:19.995473 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:20:19.998461 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 00:20:19.998833 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 00:20:20.000160 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:20:20.024042 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:20:20.032556 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:20:20.044669 systemd[1]: Stopped target network.target - Network. Jul 2 00:20:20.045943 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:20:20.048090 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:20:20.050711 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:20:20.052963 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:20:20.053127 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:20:20.055917 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:20:20.058213 systemd[1]: Stopped target basic.target - Basic System. Jul 2 00:20:20.060634 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:20:20.063176 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jul 2 00:20:20.065339 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:20:20.067717 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 00:20:20.069985 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:20:20.074725 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:20:20.076747 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:20:20.078951 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:20:20.083507 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:20:20.083695 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:20:20.086529 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:20:20.089688 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:20:20.091943 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:20:20.092137 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:20:20.094406 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:20:20.094624 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 00:20:20.096905 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:20:20.097068 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:20:20.099271 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:20:20.102821 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:20:20.106248 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:20:20.108662 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:20:20.111186 systemd[1]: Stopped target sockets.target - Socket Units. 
Jul 2 00:20:20.113825 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:20:20.114003 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:20:20.116740 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:20:20.116911 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:20:20.119046 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:20:20.119261 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:20:20.121408 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:20:20.121580 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:20:20.142503 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:20:20.145908 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:20:20.146748 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 00:20:20.147678 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 00:20:20.148067 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:20:20.149290 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:20:20.150176 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:20:20.150409 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:20:20.158240 systemd-networkd[761]: eth0: DHCPv6 lease lost
Jul 2 00:20:20.160410 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:20:20.165160 ignition[1000]: INFO : Ignition 2.18.0
Jul 2 00:20:20.165160 ignition[1000]: INFO : Stage: umount
Jul 2 00:20:20.165160 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:20:20.165160 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 00:20:20.165160 ignition[1000]: INFO : umount: umount passed
Jul 2 00:20:20.160688 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 00:20:20.173412 ignition[1000]: INFO : Ignition finished successfully
Jul 2 00:20:20.167239 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:20:20.167436 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 00:20:20.170186 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:20:20.170346 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:20:20.175183 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:20:20.175332 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 00:20:20.180239 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:20:20.181785 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:20:20.181840 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:20:20.184508 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:20:20.184604 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:20:20.186809 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:20:20.186887 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:20:20.189195 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:20:20.189268 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:20:20.191740 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:20:20.191812 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:20:20.206351 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 00:20:20.206914 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:20:20.207003 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:20:20.210386 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:20:20.210457 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:20:20.212889 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:20:20.212963 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:20:20.215573 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 00:20:20.215648 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:20:20.216525 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:20:20.232716 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:20:20.232889 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 00:20:20.244225 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:20:20.244477 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:20:20.247498 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:20:20.247600 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:20:20.249742 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:20:20.249812 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:20:20.251926 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:20:20.252020 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:20:20.254898 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:20:20.254980 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:20:20.257040 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:20:20.257163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:20:20.313477 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 00:20:20.316336 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:20:20.316446 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:20:20.319501 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 2 00:20:20.319604 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:20:20.325939 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:20:20.326061 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:20:20.346356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:20:20.346479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:20:20.351608 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:20:20.352821 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 00:20:20.727789 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:20:20.729068 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 00:20:20.732221 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 00:20:20.750202 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:20:20.750331 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 00:20:20.764312 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 00:20:20.806162 systemd[1]: Switching root.
Jul 2 00:20:20.837030 systemd-journald[193]: Journal stopped
Jul 2 00:20:22.836592 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:20:22.836670 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:20:22.836690 kernel: SELinux: policy capability open_perms=1
Jul 2 00:20:22.836707 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:20:22.836727 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:20:22.836744 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:20:22.836765 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:20:22.836790 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:20:22.836806 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:20:22.836821 kernel: audit: type=1403 audit(1719879621.757:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:20:22.836846 systemd[1]: Successfully loaded SELinux policy in 58.274ms.
Jul 2 00:20:22.836881 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.600ms.
Jul 2 00:20:22.836908 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:20:22.836926 systemd[1]: Detected virtualization kvm.
Jul 2 00:20:22.836943 systemd[1]: Detected architecture x86-64.
Jul 2 00:20:22.836965 systemd[1]: Detected first boot.
Jul 2 00:20:22.836983 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:20:22.837001 zram_generator::config[1043]: No configuration found.
Jul 2 00:20:22.837019 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:20:22.837036 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 00:20:22.837054 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 2 00:20:22.837071 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:20:22.837090 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 00:20:22.837158 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 00:20:22.837179 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 00:20:22.837196 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 00:20:22.837214 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 00:20:22.837231 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 00:20:22.837249 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 00:20:22.837265 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 00:20:22.837282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:20:22.837299 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:20:22.837321 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 00:20:22.837339 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 00:20:22.837357 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 00:20:22.837374 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:20:22.837390 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 2 00:20:22.837407 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:20:22.837424 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 2 00:20:22.837440 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 2 00:20:22.837481 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:20:22.837503 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 00:20:22.837521 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:20:22.837539 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:20:22.837556 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:20:22.837573 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:20:22.837590 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 00:20:22.837607 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 00:20:22.837628 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:20:22.837644 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:20:22.837661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:20:22.837678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 00:20:22.837695 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 00:20:22.837717 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 00:20:22.837735 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 00:20:22.837752 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:20:22.837769 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 00:20:22.837791 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 00:20:22.837808 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 00:20:22.837825 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:20:22.837843 systemd[1]: Reached target machines.target - Containers.
Jul 2 00:20:22.837859 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 00:20:22.837877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:20:22.837894 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:20:22.837912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 00:20:22.837929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:20:22.837950 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:20:22.837968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:20:22.837985 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 00:20:22.838002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:20:22.838019 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:20:22.838036 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:20:22.838054 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 2 00:20:22.838071 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:20:22.838095 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:20:22.838142 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:20:22.838161 kernel: fuse: init (API version 7.39)
Jul 2 00:20:22.838177 kernel: loop: module loaded
Jul 2 00:20:22.838193 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:20:22.838210 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 00:20:22.838228 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 00:20:22.838245 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:20:22.838262 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:20:22.838283 systemd[1]: Stopped verity-setup.service.
Jul 2 00:20:22.838301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:20:22.838318 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 00:20:22.838335 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 00:20:22.838351 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 00:20:22.838368 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 00:20:22.838390 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 00:20:22.838406 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 00:20:22.838422 kernel: ACPI: bus type drm_connector registered
Jul 2 00:20:22.838438 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:20:22.838467 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:20:22.838484 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 00:20:22.838503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:20:22.838519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:20:22.838539 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:20:22.838555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 00:20:22.838571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:20:22.838587 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:20:22.838602 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:20:22.838618 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 00:20:22.838635 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:20:22.838655 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:20:22.838702 systemd-journald[1116]: Collecting audit messages is disabled.
Jul 2 00:20:22.838732 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:20:22.838749 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 00:20:22.838766 systemd-journald[1116]: Journal started
Jul 2 00:20:22.838800 systemd-journald[1116]: Runtime Journal (/run/log/journal/9e5abaf1ce80454fa2d7c23b8f4e162c) is 6.0M, max 48.4M, 42.3M free.
Jul 2 00:20:22.486354 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:20:22.514790 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 00:20:22.515799 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:20:22.843994 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:20:22.845349 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 00:20:22.872362 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 00:20:22.881255 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 00:20:22.909099 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 00:20:22.910851 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:20:22.910915 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:20:22.913627 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 2 00:20:22.917360 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 2 00:20:22.923910 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 00:20:22.925335 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:20:22.929153 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 00:20:22.932990 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 00:20:22.934789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:20:22.938861 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 2 00:20:22.940602 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 00:20:22.943707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:20:22.949770 systemd-journald[1116]: Time spent on flushing to /var/log/journal/9e5abaf1ce80454fa2d7c23b8f4e162c is 13.634ms for 946 entries.
Jul 2 00:20:22.949770 systemd-journald[1116]: System Journal (/var/log/journal/9e5abaf1ce80454fa2d7c23b8f4e162c) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:20:23.691066 systemd-journald[1116]: Received client request to flush runtime journal.
Jul 2 00:20:23.691167 kernel: loop0: detected capacity change from 0 to 80568
Jul 2 00:20:23.691199 kernel: block loop0: the capability attribute has been deprecated.
Jul 2 00:20:23.691342 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:20:23.691382 kernel: loop1: detected capacity change from 0 to 139904
Jul 2 00:20:23.691416 kernel: loop2: detected capacity change from 0 to 209816
Jul 2 00:20:23.691460 kernel: loop3: detected capacity change from 0 to 80568
Jul 2 00:20:23.691488 kernel: loop4: detected capacity change from 0 to 139904
Jul 2 00:20:23.691508 kernel: loop5: detected capacity change from 0 to 209816
Jul 2 00:20:22.955497 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 00:20:22.958677 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:20:22.962463 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:20:22.982773 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 00:20:22.984931 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 00:20:22.989734 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 2 00:20:22.993945 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 00:20:23.011392 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 00:20:23.073941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:20:23.082811 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jul 2 00:20:23.082826 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jul 2 00:20:23.095144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:20:23.110454 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 00:20:23.159411 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:20:23.336549 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 00:20:23.382540 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:20:23.396453 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 2 00:20:23.439793 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 00:20:23.446267 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jul 2 00:20:23.446284 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jul 2 00:20:23.455505 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 00:20:23.457609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:20:23.693735 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 00:20:23.789742 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 00:20:23.790377 (sd-merge)[1177]: Merged extensions into '/usr'.
Jul 2 00:20:23.794537 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 2 00:20:23.794555 systemd[1]: Reloading...
Jul 2 00:20:23.887244 zram_generator::config[1205]: No configuration found.
Jul 2 00:20:24.043698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:20:24.094557 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 00:20:24.114046 systemd[1]: Reloading finished in 318 ms.
Jul 2 00:20:24.166750 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 00:20:24.168949 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 00:20:24.170938 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 00:20:24.187407 systemd[1]: Starting ensure-sysext.service...
Jul 2 00:20:24.193155 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:20:24.203380 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jul 2 00:20:24.203410 systemd[1]: Reloading...
Jul 2 00:20:24.233528 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 00:20:24.234570 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 00:20:24.235951 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 00:20:24.236495 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 2 00:20:24.236684 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 2 00:20:24.241249 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:20:24.243304 systemd-tmpfiles[1247]: Skipping /boot
Jul 2 00:20:24.287676 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jul 2 00:20:24.287842 systemd-tmpfiles[1247]: Skipping /boot
Jul 2 00:20:24.288165 zram_generator::config[1270]: No configuration found.
Jul 2 00:20:24.439481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:20:24.496916 systemd[1]: Reloading finished in 293 ms.
Jul 2 00:20:24.518462 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 00:20:24.534767 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:20:24.558686 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:20:24.562387 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 00:20:24.565478 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 00:20:24.587494 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:20:24.590707 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 00:20:24.594872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:20:24.595229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:20:24.597623 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:20:24.601202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:20:24.608543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:20:24.611403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:20:24.611569 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:20:24.622243 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 00:20:24.624872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:20:24.625212 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:20:24.628245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:20:24.628701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:20:24.631197 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 00:20:24.637142 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:20:24.637438 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:20:24.642549 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 00:20:24.666771 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:20:24.666976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:20:24.673559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:20:24.697537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:20:24.702445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 00:20:24.705040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:20:24.705289 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:20:24.706729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:20:24.708336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 00:20:24.748230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:20:24.748478 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 00:20:24.753148 augenrules[1343]: No rules
Jul 2 00:20:24.753297 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:20:24.753544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 00:20:24.755706 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:20:24.765695 systemd[1]: Finished ensure-sysext.service.
Jul 2 00:20:24.800073 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 00:20:24.804674 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:20:24.804825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 00:20:24.814616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 00:20:24.821422 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 00:20:24.824257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 00:20:24.873159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 00:20:24.877412 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 00:20:24.881212 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:20:24.885245 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 00:20:24.887476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 00:20:24.888025 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 00:20:24.890206 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 00:20:24.892212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:20:24.892473 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:20:24.894896 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:20:24.895215 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:20:24.897589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:20:24.897822 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:20:24.905934 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:20:24.906038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:20:24.906069 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:20:24.914489 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:20:24.929964 systemd-udevd[1363]: Using default interface naming scheme 'v255'. Jul 2 00:20:24.948466 systemd-resolved[1317]: Positive Trust Anchors: Jul 2 00:20:24.948485 systemd-resolved[1317]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:20:24.948519 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:20:24.952650 systemd-resolved[1317]: Defaulting to hostname 'linux'. Jul 2 00:20:24.953129 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:20:24.960525 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:20:24.964563 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:20:24.978421 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:20:24.995205 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 00:20:25.000552 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:20:25.026831 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 00:20:25.060145 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1386) Jul 2 00:20:25.062127 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1380) Jul 2 00:20:25.110580 systemd-networkd[1385]: lo: Link UP Jul 2 00:20:25.110592 systemd-networkd[1385]: lo: Gained carrier Jul 2 00:20:25.118299 systemd-networkd[1385]: Enumeration completed Jul 2 00:20:25.123792 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 2 00:20:25.130725 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:20:25.130735 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:20:25.133007 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:20:25.133044 systemd-networkd[1385]: eth0: Link UP Jul 2 00:20:25.133048 systemd-networkd[1385]: eth0: Gained carrier Jul 2 00:20:25.133058 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:20:25.133099 systemd[1]: Reached target network.target - Network. Jul 2 00:20:25.205151 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 00:20:25.205523 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 00:20:25.205246 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:20:25.206376 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. Jul 2 00:20:25.207316 systemd-timesyncd[1362]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 00:20:25.207359 systemd-timesyncd[1362]: Initial clock synchronization to Tue 2024-07-02 00:20:24.824117 UTC. Jul 2 00:20:25.210823 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:20:25.212840 kernel: ACPI: button: Power Button [PWRF] Jul 2 00:20:25.220148 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 00:20:25.222886 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jul 2 00:20:25.235140 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 00:20:25.236503 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:20:25.282654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:20:25.352677 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 00:20:25.429184 kernel: kvm_amd: TSC scaling supported Jul 2 00:20:25.429261 kernel: kvm_amd: Nested Virtualization enabled Jul 2 00:20:25.429281 kernel: kvm_amd: Nested Paging enabled Jul 2 00:20:25.430174 kernel: kvm_amd: LBR virtualization supported Jul 2 00:20:25.430204 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 2 00:20:25.431192 kernel: kvm_amd: Virtual GIF supported Jul 2 00:20:25.451209 kernel: EDAC MC: Ver: 3.0.0 Jul 2 00:20:25.491716 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:20:25.554858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:20:25.569373 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:20:25.578478 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:20:25.608588 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:20:25.649616 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:20:25.650782 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:20:25.652140 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:20:25.653708 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:20:25.655478 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jul 2 00:20:25.656910 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:20:25.658239 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:20:25.659491 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:20:25.659531 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:20:25.660467 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:20:25.662418 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:20:25.665369 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:20:25.678702 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:20:25.774629 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:20:25.776750 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:20:25.778254 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:20:25.779515 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:20:25.780664 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:20:25.780700 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:20:25.793200 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:20:25.869464 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:20:25.871529 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:20:25.873877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:20:25.876419 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 2 00:20:25.877685 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:20:25.878953 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:20:25.884345 jq[1422]: false Jul 2 00:20:25.885304 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:20:25.991413 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:20:25.995889 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:20:26.005042 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:20:26.013138 extend-filesystems[1423]: Found loop3 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found loop4 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found loop5 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found sr0 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found vda Jul 2 00:20:26.013138 extend-filesystems[1423]: Found vda1 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found vda2 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found vda3 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found usr Jul 2 00:20:26.013138 extend-filesystems[1423]: Found vda4 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found vda6 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found vda7 Jul 2 00:20:26.013138 extend-filesystems[1423]: Found vda9 Jul 2 00:20:26.013138 extend-filesystems[1423]: Checking size of /dev/vda9 Jul 2 00:20:26.072903 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 2 00:20:26.087583 dbus-daemon[1421]: [system] SELinux support is enabled Jul 2 00:20:26.169908 extend-filesystems[1423]: Resized partition /dev/vda9 Jul 2 00:20:26.238283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1372) Jul 2 00:20:26.073621 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:20:26.238535 extend-filesystems[1445]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:20:26.244189 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:20:26.244307 jq[1440]: true Jul 2 00:20:26.074851 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:20:26.080766 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:20:26.087479 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:20:26.155247 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:20:26.165127 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:20:26.165482 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:20:26.165919 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:20:26.166272 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:20:26.172329 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:20:26.172547 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 2 00:20:26.250062 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:20:26.253310 jq[1448]: true Jul 2 00:20:26.263382 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:20:26.263446 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:20:26.311216 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:20:26.311253 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:20:26.313117 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:20:26.323489 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:20:26.334444 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:20:26.334741 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:20:26.380785 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:20:26.384918 tar[1446]: linux-amd64/helm Jul 2 00:20:26.390397 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 00:20:26.392249 update_engine[1439]: I0702 00:20:26.392158 1439 main.cc:92] Flatcar Update Engine starting Jul 2 00:20:26.395703 update_engine[1439]: I0702 00:20:26.395637 1439 update_check_scheduler.cc:74] Next update check in 4m29s Jul 2 00:20:26.399460 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:20:26.429173 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:20:26.432621 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jul 2 00:20:26.435536 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:20:26.463767 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:20:26.465034 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:20:26.478282 systemd-networkd[1385]: eth0: Gained IPv6LL Jul 2 00:20:26.504875 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:20:26.517432 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:20:26.539366 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 00:20:26.542137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:20:26.617651 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:20:26.632449 systemd-logind[1431]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 00:20:26.632758 systemd-logind[1431]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 00:20:26.636026 systemd-logind[1431]: New seat seat0. Jul 2 00:20:26.644140 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:20:26.669250 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 00:20:26.669575 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 00:20:26.672786 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:20:26.742879 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:20:26.770571 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:20:27.011355 tar[1446]: linux-amd64/LICENSE Jul 2 00:20:27.011511 tar[1446]: linux-amd64/README.md Jul 2 00:20:27.028476 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 2 00:20:27.112145 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 00:20:27.596204 containerd[1449]: time="2024-07-02T00:20:27.596036507Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:20:27.596569 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:20:27.596569 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:20:27.596569 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 00:20:27.600993 extend-filesystems[1423]: Resized filesystem in /dev/vda9 Jul 2 00:20:27.603683 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:20:27.604084 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:20:27.624981 containerd[1449]: time="2024-07-02T00:20:27.624872692Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:20:27.624981 containerd[1449]: time="2024-07-02T00:20:27.624939654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:20:27.626520 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:20:27.627791 containerd[1449]: time="2024-07-02T00:20:27.626994828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:20:27.627791 containerd[1449]: time="2024-07-02T00:20:27.627022646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:20:27.627791 containerd[1449]: time="2024-07-02T00:20:27.627309089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:20:27.627791 containerd[1449]: time="2024-07-02T00:20:27.627327289Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:20:27.627791 containerd[1449]: time="2024-07-02T00:20:27.627445136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:20:27.627791 containerd[1449]: time="2024-07-02T00:20:27.627523454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:20:27.627791 containerd[1449]: time="2024-07-02T00:20:27.627538956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:20:27.627791 containerd[1449]: time="2024-07-02T00:20:27.627655075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:20:27.628192 containerd[1449]: time="2024-07-02T00:20:27.627980769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:20:27.628192 containerd[1449]: time="2024-07-02T00:20:27.628005237Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:20:27.628192 containerd[1449]: time="2024-07-02T00:20:27.628018388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:20:27.628322 containerd[1449]: time="2024-07-02T00:20:27.628201834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:20:27.628322 containerd[1449]: time="2024-07-02T00:20:27.628223393Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:20:27.628322 containerd[1449]: time="2024-07-02T00:20:27.628305014Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:20:27.628322 containerd[1449]: time="2024-07-02T00:20:27.628321438Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:20:27.628560 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:20:27.633228 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 00:20:27.639007 containerd[1449]: time="2024-07-02T00:20:27.638929622Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:20:27.639007 containerd[1449]: time="2024-07-02T00:20:27.638997573Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:20:27.639007 containerd[1449]: time="2024-07-02T00:20:27.639015821Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:20:27.639239 containerd[1449]: time="2024-07-02T00:20:27.639070468Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:20:27.639239 containerd[1449]: time="2024-07-02T00:20:27.639117849Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:20:27.639239 containerd[1449]: time="2024-07-02T00:20:27.639134158Z" level=info msg="NRI interface is disabled by configuration." 
Jul 2 00:20:27.639239 containerd[1449]: time="2024-07-02T00:20:27.639150467Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:20:27.639420 containerd[1449]: time="2024-07-02T00:20:27.639376859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:20:27.639420 containerd[1449]: time="2024-07-02T00:20:27.639414813Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:20:27.639522 containerd[1449]: time="2024-07-02T00:20:27.639434779Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:20:27.639522 containerd[1449]: time="2024-07-02T00:20:27.639451731Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:20:27.639522 containerd[1449]: time="2024-07-02T00:20:27.639473367Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:20:27.639522 containerd[1449]: time="2024-07-02T00:20:27.639492047Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:20:27.639522 containerd[1449]: time="2024-07-02T00:20:27.639506330Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:20:27.639522 containerd[1449]: time="2024-07-02T00:20:27.639522600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:20:27.639648 containerd[1449]: time="2024-07-02T00:20:27.639539197Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 2 00:20:27.639648 containerd[1449]: time="2024-07-02T00:20:27.639553346Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:20:27.639648 containerd[1449]: time="2024-07-02T00:20:27.639565987Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:20:27.639648 containerd[1449]: time="2024-07-02T00:20:27.639578534Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:20:27.639737 containerd[1449]: time="2024-07-02T00:20:27.639724448Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:20:27.640274 containerd[1449]: time="2024-07-02T00:20:27.640198398Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:20:27.640274 containerd[1449]: time="2024-07-02T00:20:27.640271543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640354 containerd[1449]: time="2024-07-02T00:20:27.640292296Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:20:27.640354 containerd[1449]: time="2024-07-02T00:20:27.640326084Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:20:27.640436 containerd[1449]: time="2024-07-02T00:20:27.640417745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640474 containerd[1449]: time="2024-07-02T00:20:27.640438056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640474 containerd[1449]: time="2024-07-02T00:20:27.640455853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 2 00:20:27.640474 containerd[1449]: time="2024-07-02T00:20:27.640470856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640540 containerd[1449]: time="2024-07-02T00:20:27.640487280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640540 containerd[1449]: time="2024-07-02T00:20:27.640505211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640540 containerd[1449]: time="2024-07-02T00:20:27.640521165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640540 containerd[1449]: time="2024-07-02T00:20:27.640536600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640611 containerd[1449]: time="2024-07-02T00:20:27.640554396Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:20:27.640844 containerd[1449]: time="2024-07-02T00:20:27.640806936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640844 containerd[1449]: time="2024-07-02T00:20:27.640835407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640916 containerd[1449]: time="2024-07-02T00:20:27.640853540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640916 containerd[1449]: time="2024-07-02T00:20:27.640870088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640916 containerd[1449]: time="2024-07-02T00:20:27.640886944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 2 00:20:27.640994 containerd[1449]: time="2024-07-02T00:20:27.640921443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640994 containerd[1449]: time="2024-07-02T00:20:27.640937732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.640994 containerd[1449]: time="2024-07-02T00:20:27.640954396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 00:20:27.641363 containerd[1449]: time="2024-07-02T00:20:27.641293327Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: 
Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:20:27.641515 containerd[1449]: time="2024-07-02T00:20:27.641367632Z" level=info msg="Connect containerd service" Jul 2 00:20:27.641515 containerd[1449]: time="2024-07-02T00:20:27.641400288Z" level=info msg="using legacy CRI server" Jul 2 00:20:27.641515 containerd[1449]: time="2024-07-02T00:20:27.641409042Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:20:27.641577 containerd[1449]: time="2024-07-02T00:20:27.641521534Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:20:27.642334 containerd[1449]: time="2024-07-02T00:20:27.642293475Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:20:27.642399 containerd[1449]: time="2024-07-02T00:20:27.642350858Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:20:27.642399 containerd[1449]: time="2024-07-02T00:20:27.642371265Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:20:27.642399 containerd[1449]: time="2024-07-02T00:20:27.642385490Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:20:27.642479 containerd[1449]: time="2024-07-02T00:20:27.642402961Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:20:27.642662 containerd[1449]: time="2024-07-02T00:20:27.642582750Z" level=info msg="Start subscribing containerd event" Jul 2 00:20:27.642709 containerd[1449]: time="2024-07-02T00:20:27.642667135Z" level=info msg="Start recovering state" Jul 2 00:20:27.642808 containerd[1449]: time="2024-07-02T00:20:27.642761733Z" level=info msg="Start event monitor" Jul 2 00:20:27.642909 containerd[1449]: time="2024-07-02T00:20:27.642766360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:20:27.643756 containerd[1449]: time="2024-07-02T00:20:27.642953338Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 00:20:27.643756 containerd[1449]: time="2024-07-02T00:20:27.643474362Z" level=info msg="Start snapshots syncer" Jul 2 00:20:27.643756 containerd[1449]: time="2024-07-02T00:20:27.643498149Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:20:27.643756 containerd[1449]: time="2024-07-02T00:20:27.643513891Z" level=info msg="Start streaming server" Jul 2 00:20:27.643756 containerd[1449]: time="2024-07-02T00:20:27.643595146Z" level=info msg="containerd successfully booted in 0.158823s" Jul 2 00:20:27.644180 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:20:28.315697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:20:28.322831 (kubelet)[1534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:20:28.326650 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:20:28.331895 systemd[1]: Startup finished in 1.180s (kernel) + 7.013s (initrd) + 6.630s (userspace) = 14.823s. Jul 2 00:20:28.871649 kubelet[1534]: E0702 00:20:28.871541 1534 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:20:28.876481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:20:28.876685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:20:28.877043 systemd[1]: kubelet.service: Consumed 1.219s CPU time. Jul 2 00:20:35.155936 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:20:35.157312 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:43410.service - OpenSSH per-connection server daemon (10.0.0.1:43410). 
Jul 2 00:20:35.199281 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 43410 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:20:35.201778 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:35.211531 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:20:35.226529 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:20:35.228514 systemd-logind[1431]: New session 1 of user core. Jul 2 00:20:35.241857 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:20:35.252405 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:20:35.255646 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:35.372048 systemd[1551]: Queued start job for default target default.target. Jul 2 00:20:35.383637 systemd[1551]: Created slice app.slice - User Application Slice. Jul 2 00:20:35.383665 systemd[1551]: Reached target paths.target - Paths. Jul 2 00:20:35.383681 systemd[1551]: Reached target timers.target - Timers. Jul 2 00:20:35.385648 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:20:35.399190 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:20:35.399334 systemd[1551]: Reached target sockets.target - Sockets. Jul 2 00:20:35.399352 systemd[1551]: Reached target basic.target - Basic System. Jul 2 00:20:35.399391 systemd[1551]: Reached target default.target - Main User Target. Jul 2 00:20:35.399431 systemd[1551]: Startup finished in 135ms. Jul 2 00:20:35.400081 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:20:35.401816 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:20:35.464054 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:43418.service - OpenSSH per-connection server daemon (10.0.0.1:43418). 
Jul 2 00:20:35.498715 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 43418 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:20:35.500259 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:35.504634 systemd-logind[1431]: New session 2 of user core. Jul 2 00:20:35.516234 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:20:35.571533 sshd[1562]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:35.581761 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:43418.service: Deactivated successfully. Jul 2 00:20:35.583565 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:20:35.585486 systemd-logind[1431]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:20:35.592523 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:43424.service - OpenSSH per-connection server daemon (10.0.0.1:43424). Jul 2 00:20:35.593623 systemd-logind[1431]: Removed session 2. Jul 2 00:20:35.619611 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 43424 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:20:35.621143 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:35.627668 systemd-logind[1431]: New session 3 of user core. Jul 2 00:20:35.642446 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:20:35.692552 sshd[1569]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:35.707928 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:43424.service: Deactivated successfully. Jul 2 00:20:35.709908 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:20:35.711646 systemd-logind[1431]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:20:35.723389 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:43428.service - OpenSSH per-connection server daemon (10.0.0.1:43428). Jul 2 00:20:35.724411 systemd-logind[1431]: Removed session 3. 
Jul 2 00:20:35.751380 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 43428 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:20:35.753122 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:35.757493 systemd-logind[1431]: New session 4 of user core. Jul 2 00:20:35.767289 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:20:35.825228 sshd[1577]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:35.837131 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:43428.service: Deactivated successfully. Jul 2 00:20:35.838945 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:20:35.840802 systemd-logind[1431]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:20:35.842242 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:43442.service - OpenSSH per-connection server daemon (10.0.0.1:43442). Jul 2 00:20:35.843187 systemd-logind[1431]: Removed session 4. Jul 2 00:20:35.874183 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 43442 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:20:35.876078 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:35.880992 systemd-logind[1431]: New session 5 of user core. Jul 2 00:20:35.888390 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:20:35.951217 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:20:35.951560 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:20:35.970761 sudo[1588]: pam_unix(sudo:session): session closed for user root Jul 2 00:20:35.973252 sshd[1585]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:35.986830 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:43442.service: Deactivated successfully. Jul 2 00:20:35.988995 systemd[1]: session-5.scope: Deactivated successfully. 
Jul 2 00:20:35.991420 systemd-logind[1431]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:20:36.003598 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:43446.service - OpenSSH per-connection server daemon (10.0.0.1:43446). Jul 2 00:20:36.004824 systemd-logind[1431]: Removed session 5. Jul 2 00:20:36.033376 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 43446 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:20:36.035366 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:36.040304 systemd-logind[1431]: New session 6 of user core. Jul 2 00:20:36.051314 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:20:36.107823 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:20:36.108192 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:20:36.112538 sudo[1597]: pam_unix(sudo:session): session closed for user root Jul 2 00:20:36.119701 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:20:36.120098 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:20:36.144578 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:20:36.147259 auditctl[1600]: No rules Jul 2 00:20:36.149048 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:20:36.149471 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:20:36.151799 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:20:36.193854 augenrules[1618]: No rules Jul 2 00:20:36.196151 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 2 00:20:36.197541 sudo[1596]: pam_unix(sudo:session): session closed for user root Jul 2 00:20:36.199619 sshd[1593]: pam_unix(sshd:session): session closed for user core Jul 2 00:20:36.216089 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:43446.service: Deactivated successfully. Jul 2 00:20:36.217822 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:20:36.219383 systemd-logind[1431]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:20:36.228520 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:43460.service - OpenSSH per-connection server daemon (10.0.0.1:43460). Jul 2 00:20:36.229716 systemd-logind[1431]: Removed session 6. Jul 2 00:20:36.256201 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 43460 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:20:36.258698 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:20:36.263563 systemd-logind[1431]: New session 7 of user core. Jul 2 00:20:36.279430 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:20:36.333798 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:20:36.334226 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:20:36.450648 (dockerd)[1639]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:20:36.450718 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:20:36.757665 dockerd[1639]: time="2024-07-02T00:20:36.757597306Z" level=info msg="Starting up" Jul 2 00:20:38.436775 dockerd[1639]: time="2024-07-02T00:20:38.436699492Z" level=info msg="Loading containers: start." Jul 2 00:20:39.126950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 2 00:20:39.141376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:20:39.204695 kernel: Initializing XFRM netlink socket Jul 2 00:20:39.303037 systemd-networkd[1385]: docker0: Link UP Jul 2 00:20:39.313323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:20:39.320378 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:20:39.390963 kubelet[1735]: E0702 00:20:39.390826 1735 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:20:39.400712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:20:39.400982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:20:41.193642 dockerd[1639]: time="2024-07-02T00:20:41.193577362Z" level=info msg="Loading containers: done." Jul 2 00:20:41.246351 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1846525126-merged.mount: Deactivated successfully. 
Jul 2 00:20:41.697822 dockerd[1639]: time="2024-07-02T00:20:41.697718145Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:20:41.698038 dockerd[1639]: time="2024-07-02T00:20:41.697956903Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:20:41.698153 dockerd[1639]: time="2024-07-02T00:20:41.698117840Z" level=info msg="Daemon has completed initialization" Jul 2 00:20:42.285778 dockerd[1639]: time="2024-07-02T00:20:42.285694120Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:20:42.286025 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:20:43.004347 containerd[1449]: time="2024-07-02T00:20:43.004290080Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 00:20:46.110604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2342742139.mount: Deactivated successfully. Jul 2 00:20:49.651281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:20:49.664311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:20:49.829429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:20:49.835407 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:20:49.961958 kubelet[1815]: E0702 00:20:49.961729 1815 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:20:49.967363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:20:49.967613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:21:00.183866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 00:21:00.201415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:21:00.349636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:00.355701 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:21:01.037761 kubelet[1862]: E0702 00:21:01.037655 1862 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:21:01.042809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:21:01.043034 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 00:21:04.734819 containerd[1449]: time="2024-07-02T00:21:04.732617073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:04.786538 containerd[1449]: time="2024-07-02T00:21:04.786390864Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jul 2 00:21:04.836569 containerd[1449]: time="2024-07-02T00:21:04.836414687Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:04.858357 containerd[1449]: time="2024-07-02T00:21:04.858180152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:04.859965 containerd[1449]: time="2024-07-02T00:21:04.859875600Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 21.855528699s" Jul 2 00:21:04.859965 containerd[1449]: time="2024-07-02T00:21:04.859946293Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 00:21:04.903959 containerd[1449]: time="2024-07-02T00:21:04.903896610Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 00:21:09.671494 containerd[1449]: time="2024-07-02T00:21:09.671404985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:09.672941 containerd[1449]: time="2024-07-02T00:21:09.672858342Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jul 2 00:21:09.677292 containerd[1449]: time="2024-07-02T00:21:09.677219645Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:09.682064 containerd[1449]: time="2024-07-02T00:21:09.681980436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:09.683263 containerd[1449]: time="2024-07-02T00:21:09.683199551Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 4.779256963s" Jul 2 00:21:09.683263 containerd[1449]: time="2024-07-02T00:21:09.683241790Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 00:21:09.719799 containerd[1449]: time="2024-07-02T00:21:09.719739364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 00:21:11.168825 containerd[1449]: time="2024-07-02T00:21:11.168724981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:11.170219 containerd[1449]: time="2024-07-02T00:21:11.170084412Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jul 2 00:21:11.173772 containerd[1449]: time="2024-07-02T00:21:11.173721170Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:11.180231 containerd[1449]: time="2024-07-02T00:21:11.180120829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:11.181683 containerd[1449]: time="2024-07-02T00:21:11.181627185Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.461840223s" Jul 2 00:21:11.181683 containerd[1449]: time="2024-07-02T00:21:11.181675255Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 00:21:11.184272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 00:21:11.194471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:21:11.214875 containerd[1449]: time="2024-07-02T00:21:11.214544062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 00:21:11.361379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:21:11.367720 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:21:11.466801 kubelet[1922]: E0702 00:21:11.466706 1922 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:21:11.473417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:21:11.473736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:21:11.508285 update_engine[1439]: I0702 00:21:11.508211 1439 update_attempter.cc:509] Updating boot flags... Jul 2 00:21:12.142141 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1939) Jul 2 00:21:12.185179 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1942) Jul 2 00:21:13.897861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1568900029.mount: Deactivated successfully. 
Jul 2 00:21:14.927449 containerd[1449]: time="2024-07-02T00:21:14.927345311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:14.928340 containerd[1449]: time="2024-07-02T00:21:14.928279143Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jul 2 00:21:14.929739 containerd[1449]: time="2024-07-02T00:21:14.929701176Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:14.932320 containerd[1449]: time="2024-07-02T00:21:14.932266209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:14.932836 containerd[1449]: time="2024-07-02T00:21:14.932772806Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 3.718143747s" Jul 2 00:21:14.932836 containerd[1449]: time="2024-07-02T00:21:14.932814001Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 00:21:14.961061 containerd[1449]: time="2024-07-02T00:21:14.961003843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:21:15.712625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977056751.mount: Deactivated successfully. 
Jul 2 00:21:15.722435 containerd[1449]: time="2024-07-02T00:21:15.722355648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:15.723164 containerd[1449]: time="2024-07-02T00:21:15.723080997Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 00:21:15.724722 containerd[1449]: time="2024-07-02T00:21:15.724671523Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:15.727612 containerd[1449]: time="2024-07-02T00:21:15.727523020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:15.728377 containerd[1449]: time="2024-07-02T00:21:15.728312109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 767.261289ms" Jul 2 00:21:15.728377 containerd[1449]: time="2024-07-02T00:21:15.728369246Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 00:21:15.772134 containerd[1449]: time="2024-07-02T00:21:15.772023451Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 00:21:16.330863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435632141.mount: Deactivated successfully. Jul 2 00:21:21.683844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Jul 2 00:21:21.695414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:21:21.865346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:21.872419 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:21:21.956138 kubelet[2023]: E0702 00:21:21.955925 2023 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:21:21.960198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:21:21.960405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:21:22.428820 containerd[1449]: time="2024-07-02T00:21:22.428592947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:22.430212 containerd[1449]: time="2024-07-02T00:21:22.430091489Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jul 2 00:21:22.431956 containerd[1449]: time="2024-07-02T00:21:22.431907366Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:22.435438 containerd[1449]: time="2024-07-02T00:21:22.435363155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:22.436948 containerd[1449]: time="2024-07-02T00:21:22.436888510Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" 
with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 6.664812013s" Jul 2 00:21:22.436948 containerd[1449]: time="2024-07-02T00:21:22.436941867Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 00:21:22.467960 containerd[1449]: time="2024-07-02T00:21:22.467905822Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 00:21:23.208399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819149722.mount: Deactivated successfully. Jul 2 00:21:23.596062 containerd[1449]: time="2024-07-02T00:21:23.595862925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:23.597387 containerd[1449]: time="2024-07-02T00:21:23.597310518Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jul 2 00:21:23.598668 containerd[1449]: time="2024-07-02T00:21:23.598626098Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:23.601260 containerd[1449]: time="2024-07-02T00:21:23.601191338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:21:23.601849 containerd[1449]: time="2024-07-02T00:21:23.601812135Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.133853438s" Jul 2 00:21:23.601849 containerd[1449]: time="2024-07-02T00:21:23.601842786Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jul 2 00:21:26.592601 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:26.603440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:21:26.625860 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit session-7.scope)... Jul 2 00:21:26.625879 systemd[1]: Reloading... Jul 2 00:21:26.715650 zram_generator::config[2166]: No configuration found. Jul 2 00:21:27.283679 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:21:27.363087 systemd[1]: Reloading finished in 736 ms. Jul 2 00:21:27.416904 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:21:27.417018 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:21:27.417353 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:27.430636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:21:27.592860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:27.599420 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:21:27.662027 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:21:27.663180 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:21:27.663180 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:21:27.663180 kubelet[2211]: I0702 00:21:27.662659 2211 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:21:28.037453 kubelet[2211]: I0702 00:21:28.037392 2211 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:21:28.037453 kubelet[2211]: I0702 00:21:28.037437 2211 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:21:28.037740 kubelet[2211]: I0702 00:21:28.037716 2211 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:21:28.054363 kubelet[2211]: E0702 00:21:28.054319 2211 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.054637 kubelet[2211]: I0702 00:21:28.054354 2211 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:21:28.072429 kubelet[2211]: I0702 00:21:28.071863 2211 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:21:28.072429 kubelet[2211]: I0702 00:21:28.072174 2211 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:21:28.073003 kubelet[2211]: I0702 00:21:28.072940 2211 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:21:28.073742 kubelet[2211]: I0702 00:21:28.073708 2211 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:21:28.073742 kubelet[2211]: I0702 00:21:28.073740 2211 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:21:28.075878 kubelet[2211]: I0702 
00:21:28.075821 2211 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:21:28.077864 kubelet[2211]: I0702 00:21:28.077812 2211 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:21:28.077864 kubelet[2211]: I0702 00:21:28.077861 2211 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:21:28.077962 kubelet[2211]: I0702 00:21:28.077904 2211 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:21:28.077962 kubelet[2211]: I0702 00:21:28.077933 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:21:28.079967 kubelet[2211]: I0702 00:21:28.079929 2211 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:21:28.081464 kubelet[2211]: W0702 00:21:28.081386 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.081464 kubelet[2211]: E0702 00:21:28.081436 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.081464 kubelet[2211]: W0702 00:21:28.081386 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.081464 kubelet[2211]: E0702 00:21:28.081463 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.082488 kubelet[2211]: W0702 00:21:28.082451 2211 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:21:28.083382 kubelet[2211]: I0702 00:21:28.083355 2211 server.go:1232] "Started kubelet" Jul 2 00:21:28.084989 kubelet[2211]: I0702 00:21:28.084852 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:21:28.086449 kubelet[2211]: I0702 00:21:28.086096 2211 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:21:28.086449 kubelet[2211]: I0702 00:21:28.086135 2211 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:21:28.086449 kubelet[2211]: I0702 00:21:28.086243 2211 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:21:28.086449 kubelet[2211]: I0702 00:21:28.086328 2211 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:21:28.086680 kubelet[2211]: W0702 00:21:28.086639 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.086718 kubelet[2211]: E0702 00:21:28.086696 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.087231 kubelet[2211]: I0702 00:21:28.087202 2211 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:21:28.089923 kubelet[2211]: E0702 00:21:28.087316 2211 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.95:6443: connect: connection refused" interval="200ms" Jul 2 00:21:28.089923 kubelet[2211]: E0702 00:21:28.087449 2211 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:21:28.089923 kubelet[2211]: E0702 00:21:28.087472 2211 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:21:28.089923 kubelet[2211]: I0702 00:21:28.088464 2211 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:21:28.090016 kubelet[2211]: E0702 00:21:28.088810 2211 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3d76c89da499", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 21, 28, 83317913, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 21, 28, 83317913, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post 
"https://10.0.0.95:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.95:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:21:28.090016 kubelet[2211]: I0702 00:21:28.089267 2211 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:21:28.123316 kubelet[2211]: I0702 00:21:28.123273 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:21:28.125190 kubelet[2211]: I0702 00:21:28.124909 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:21:28.125190 kubelet[2211]: I0702 00:21:28.124949 2211 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:21:28.125190 kubelet[2211]: I0702 00:21:28.124970 2211 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:21:28.125190 kubelet[2211]: E0702 00:21:28.125047 2211 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:21:28.126870 kubelet[2211]: W0702 00:21:28.125654 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.126870 kubelet[2211]: E0702 00:21:28.125734 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.128182 kubelet[2211]: I0702 00:21:28.127631 2211 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:21:28.128182 kubelet[2211]: I0702 00:21:28.127651 2211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:21:28.128182 
kubelet[2211]: I0702 00:21:28.127678 2211 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:21:28.187681 kubelet[2211]: I0702 00:21:28.187647 2211 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:21:28.188219 kubelet[2211]: E0702 00:21:28.188201 2211 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jul 2 00:21:28.225706 kubelet[2211]: E0702 00:21:28.225564 2211 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:21:28.288751 kubelet[2211]: E0702 00:21:28.288581 2211 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms" Jul 2 00:21:28.390726 kubelet[2211]: I0702 00:21:28.390676 2211 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:21:28.391055 kubelet[2211]: E0702 00:21:28.391026 2211 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jul 2 00:21:28.426344 kubelet[2211]: E0702 00:21:28.426222 2211 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:21:28.594482 kubelet[2211]: E0702 00:21:28.594194 2211 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de3d76c89da499", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 21, 28, 83317913, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 21, 28, 83317913, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.95:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.95:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:21:28.690141 kubelet[2211]: E0702 00:21:28.690055 2211 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Jul 2 00:21:28.793294 kubelet[2211]: I0702 00:21:28.793241 2211 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:21:28.793752 kubelet[2211]: E0702 00:21:28.793719 2211 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jul 2 00:21:28.826964 kubelet[2211]: E0702 00:21:28.826899 2211 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:21:28.854495 kubelet[2211]: I0702 00:21:28.854326 2211 policy_none.go:49] "None policy: Start" Jul 2 00:21:28.855208 kubelet[2211]: 
I0702 00:21:28.855188 2211 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:21:28.855292 kubelet[2211]: I0702 00:21:28.855232 2211 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:21:28.870603 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:21:28.889245 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 00:21:28.893404 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 00:21:28.901353 kubelet[2211]: I0702 00:21:28.901302 2211 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:21:28.901883 kubelet[2211]: I0702 00:21:28.901667 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:21:28.902291 kubelet[2211]: E0702 00:21:28.902271 2211 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:21:28.928757 kubelet[2211]: W0702 00:21:28.928672 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:28.928757 kubelet[2211]: E0702 00:21:28.928742 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:29.286451 kubelet[2211]: W0702 00:21:29.286351 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 
00:21:29.286451 kubelet[2211]: E0702 00:21:29.286429 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:29.411300 kubelet[2211]: W0702 00:21:29.411204 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:29.411300 kubelet[2211]: E0702 00:21:29.411291 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:29.487057 kubelet[2211]: W0702 00:21:29.486960 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:29.487057 kubelet[2211]: E0702 00:21:29.487040 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:29.491398 kubelet[2211]: E0702 00:21:29.491352 2211 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="1.6s" Jul 2 00:21:29.595375 
kubelet[2211]: I0702 00:21:29.595234 2211 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:21:29.595636 kubelet[2211]: E0702 00:21:29.595614 2211 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jul 2 00:21:29.627881 kubelet[2211]: I0702 00:21:29.627821 2211 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:21:29.629307 kubelet[2211]: I0702 00:21:29.629284 2211 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:21:29.630217 kubelet[2211]: I0702 00:21:29.630177 2211 topology_manager.go:215] "Topology Admit Handler" podUID="b5419719f94957e1d7c1e1eeaa38f8e1" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:21:29.637171 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jul 2 00:21:29.657653 systemd[1]: Created slice kubepods-burstable-podb5419719f94957e1d7c1e1eeaa38f8e1.slice - libcontainer container kubepods-burstable-podb5419719f94957e1d7c1e1eeaa38f8e1.slice. Jul 2 00:21:29.673512 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. 
Jul 2 00:21:29.695044 kubelet[2211]: I0702 00:21:29.694988 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:21:29.695044 kubelet[2211]: I0702 00:21:29.695040 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:21:29.695044 kubelet[2211]: I0702 00:21:29.695061 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:21:29.695585 kubelet[2211]: I0702 00:21:29.695087 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:21:29.695585 kubelet[2211]: I0702 00:21:29.695131 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5419719f94957e1d7c1e1eeaa38f8e1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b5419719f94957e1d7c1e1eeaa38f8e1\") " 
pod="kube-system/kube-apiserver-localhost" Jul 2 00:21:29.695585 kubelet[2211]: I0702 00:21:29.695152 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:21:29.695585 kubelet[2211]: I0702 00:21:29.695169 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:21:29.695585 kubelet[2211]: I0702 00:21:29.695192 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5419719f94957e1d7c1e1eeaa38f8e1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b5419719f94957e1d7c1e1eeaa38f8e1\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:21:29.695714 kubelet[2211]: I0702 00:21:29.695214 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5419719f94957e1d7c1e1eeaa38f8e1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b5419719f94957e1d7c1e1eeaa38f8e1\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:21:29.954643 kubelet[2211]: E0702 00:21:29.954589 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:29.955469 containerd[1449]: time="2024-07-02T00:21:29.955405187Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 00:21:29.960797 kubelet[2211]: E0702 00:21:29.960756 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:29.961333 containerd[1449]: time="2024-07-02T00:21:29.961290426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b5419719f94957e1d7c1e1eeaa38f8e1,Namespace:kube-system,Attempt:0,}" Jul 2 00:21:29.976640 kubelet[2211]: E0702 00:21:29.976589 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:29.977043 containerd[1449]: time="2024-07-02T00:21:29.977001402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 00:21:30.065668 kubelet[2211]: E0702 00:21:30.065627 2211 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:30.794749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2746144594.mount: Deactivated successfully. 
Jul 2 00:21:30.802183 containerd[1449]: time="2024-07-02T00:21:30.802103110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:21:30.803984 containerd[1449]: time="2024-07-02T00:21:30.803911365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:21:30.805308 containerd[1449]: time="2024-07-02T00:21:30.805261309Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:21:30.806258 containerd[1449]: time="2024-07-02T00:21:30.806222869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:21:30.807546 containerd[1449]: time="2024-07-02T00:21:30.807482176Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:21:30.808441 containerd[1449]: time="2024-07-02T00:21:30.808379669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:21:30.809208 containerd[1449]: time="2024-07-02T00:21:30.809156557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:21:30.812199 containerd[1449]: time="2024-07-02T00:21:30.812148559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:21:30.812837 
containerd[1449]: time="2024-07-02T00:21:30.812792616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 851.370922ms" Jul 2 00:21:30.813681 containerd[1449]: time="2024-07-02T00:21:30.813638669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 836.564955ms" Jul 2 00:21:30.818525 containerd[1449]: time="2024-07-02T00:21:30.818459116Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 862.933441ms" Jul 2 00:21:31.044029 containerd[1449]: time="2024-07-02T00:21:31.043684238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:21:31.044029 containerd[1449]: time="2024-07-02T00:21:31.043768844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:31.044029 containerd[1449]: time="2024-07-02T00:21:31.043905161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:21:31.044029 containerd[1449]: time="2024-07-02T00:21:31.043962213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:31.045890 containerd[1449]: time="2024-07-02T00:21:31.045572756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:21:31.045890 containerd[1449]: time="2024-07-02T00:21:31.045622904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:31.045890 containerd[1449]: time="2024-07-02T00:21:31.045641991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:21:31.045890 containerd[1449]: time="2024-07-02T00:21:31.045655187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:31.047567 containerd[1449]: time="2024-07-02T00:21:31.047085846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:21:31.047567 containerd[1449]: time="2024-07-02T00:21:31.047198097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:31.047567 containerd[1449]: time="2024-07-02T00:21:31.047223646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:21:31.047567 containerd[1449]: time="2024-07-02T00:21:31.047247593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:21:31.092819 kubelet[2211]: E0702 00:21:31.092772 2211 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="3.2s" Jul 2 00:21:31.096412 systemd[1]: Started cri-containerd-96a802e03916621481e831e9385ad9ae69ae7067d6dc806c4320825c77b431df.scope - libcontainer container 96a802e03916621481e831e9385ad9ae69ae7067d6dc806c4320825c77b431df. Jul 2 00:21:31.103993 systemd[1]: Started cri-containerd-2de5bc9a044d644a992ba60c676a373bbe168c031b399042ecf597bb9143a084.scope - libcontainer container 2de5bc9a044d644a992ba60c676a373bbe168c031b399042ecf597bb9143a084. Jul 2 00:21:31.106812 systemd[1]: Started cri-containerd-d2ac319722650a647837f27e59c3655b0425a21b12f613a42a16bc1858a40261.scope - libcontainer container d2ac319722650a647837f27e59c3655b0425a21b12f613a42a16bc1858a40261. 
Jul 2 00:21:31.186864 containerd[1449]: time="2024-07-02T00:21:31.186595764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b5419719f94957e1d7c1e1eeaa38f8e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"96a802e03916621481e831e9385ad9ae69ae7067d6dc806c4320825c77b431df\"" Jul 2 00:21:31.188425 containerd[1449]: time="2024-07-02T00:21:31.188194171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2ac319722650a647837f27e59c3655b0425a21b12f613a42a16bc1858a40261\"" Jul 2 00:21:31.189915 kubelet[2211]: E0702 00:21:31.188717 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:31.189915 kubelet[2211]: E0702 00:21:31.189647 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:31.189915 kubelet[2211]: E0702 00:21:31.189874 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:31.190016 containerd[1449]: time="2024-07-02T00:21:31.189389949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2de5bc9a044d644a992ba60c676a373bbe168c031b399042ecf597bb9143a084\"" Jul 2 00:21:31.192225 containerd[1449]: time="2024-07-02T00:21:31.192199294Z" level=info msg="CreateContainer within sandbox \"d2ac319722650a647837f27e59c3655b0425a21b12f613a42a16bc1858a40261\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:21:31.192372 containerd[1449]: 
time="2024-07-02T00:21:31.192315983Z" level=info msg="CreateContainer within sandbox \"96a802e03916621481e831e9385ad9ae69ae7067d6dc806c4320825c77b431df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:21:31.192478 containerd[1449]: time="2024-07-02T00:21:31.192316204Z" level=info msg="CreateContainer within sandbox \"2de5bc9a044d644a992ba60c676a373bbe168c031b399042ecf597bb9143a084\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:21:31.196788 kubelet[2211]: I0702 00:21:31.196753 2211 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:21:31.197145 kubelet[2211]: E0702 00:21:31.197124 2211 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jul 2 00:21:31.476552 kubelet[2211]: W0702 00:21:31.476479 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:31.476552 kubelet[2211]: E0702 00:21:31.476546 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:31.481015 kubelet[2211]: W0702 00:21:31.480965 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:31.481015 kubelet[2211]: E0702 00:21:31.480999 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: 
failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:31.701994 kubelet[2211]: W0702 00:21:31.701941 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:31.701994 kubelet[2211]: E0702 00:21:31.701993 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:31.796362 kubelet[2211]: W0702 00:21:31.796216 2211 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:31.796362 kubelet[2211]: E0702 00:21:31.796260 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:34.190789 containerd[1449]: time="2024-07-02T00:21:34.190705217Z" level=info msg="CreateContainer within sandbox \"96a802e03916621481e831e9385ad9ae69ae7067d6dc806c4320825c77b431df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"290937c476ff12340f69f3c111bf164f93c1d3eb67cfe3ad2385b74d835865e0\"" Jul 2 00:21:34.191558 containerd[1449]: time="2024-07-02T00:21:34.191529379Z" level=info msg="StartContainer for \"290937c476ff12340f69f3c111bf164f93c1d3eb67cfe3ad2385b74d835865e0\"" Jul 2 00:21:34.201376 
containerd[1449]: time="2024-07-02T00:21:34.201301280Z" level=info msg="CreateContainer within sandbox \"2de5bc9a044d644a992ba60c676a373bbe168c031b399042ecf597bb9143a084\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72638105df9153b67868b1884cbe1a8628625268a17b9a45e67ad9e3e07da247\"" Jul 2 00:21:34.202303 containerd[1449]: time="2024-07-02T00:21:34.201931142Z" level=info msg="StartContainer for \"72638105df9153b67868b1884cbe1a8628625268a17b9a45e67ad9e3e07da247\"" Jul 2 00:21:34.202777 kubelet[2211]: E0702 00:21:34.202752 2211 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.95:6443: connect: connection refused Jul 2 00:21:34.208205 containerd[1449]: time="2024-07-02T00:21:34.208143652Z" level=info msg="CreateContainer within sandbox \"d2ac319722650a647837f27e59c3655b0425a21b12f613a42a16bc1858a40261\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"254c060c9aa251b42b411ed65eb08fd4858854f52c07180da67d48ca989f15e6\"" Jul 2 00:21:34.208930 containerd[1449]: time="2024-07-02T00:21:34.208895532Z" level=info msg="StartContainer for \"254c060c9aa251b42b411ed65eb08fd4858854f52c07180da67d48ca989f15e6\"" Jul 2 00:21:34.227289 systemd[1]: Started cri-containerd-290937c476ff12340f69f3c111bf164f93c1d3eb67cfe3ad2385b74d835865e0.scope - libcontainer container 290937c476ff12340f69f3c111bf164f93c1d3eb67cfe3ad2385b74d835865e0. Jul 2 00:21:34.238274 systemd[1]: Started cri-containerd-72638105df9153b67868b1884cbe1a8628625268a17b9a45e67ad9e3e07da247.scope - libcontainer container 72638105df9153b67868b1884cbe1a8628625268a17b9a45e67ad9e3e07da247. 
Jul 2 00:21:34.252255 systemd[1]: Started cri-containerd-254c060c9aa251b42b411ed65eb08fd4858854f52c07180da67d48ca989f15e6.scope - libcontainer container 254c060c9aa251b42b411ed65eb08fd4858854f52c07180da67d48ca989f15e6. Jul 2 00:21:34.293590 kubelet[2211]: E0702 00:21:34.293545 2211 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="6.4s" Jul 2 00:21:34.398582 kubelet[2211]: I0702 00:21:34.398543 2211 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:21:34.481582 containerd[1449]: time="2024-07-02T00:21:34.481448233Z" level=info msg="StartContainer for \"254c060c9aa251b42b411ed65eb08fd4858854f52c07180da67d48ca989f15e6\" returns successfully" Jul 2 00:21:34.481702 containerd[1449]: time="2024-07-02T00:21:34.481465647Z" level=info msg="StartContainer for \"290937c476ff12340f69f3c111bf164f93c1d3eb67cfe3ad2385b74d835865e0\" returns successfully" Jul 2 00:21:34.481702 containerd[1449]: time="2024-07-02T00:21:34.481477220Z" level=info msg="StartContainer for \"72638105df9153b67868b1884cbe1a8628625268a17b9a45e67ad9e3e07da247\" returns successfully" Jul 2 00:21:35.144429 kubelet[2211]: E0702 00:21:35.144382 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:35.145594 kubelet[2211]: E0702 00:21:35.145572 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:35.147147 kubelet[2211]: E0702 00:21:35.147123 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 
2 00:21:35.868242 kubelet[2211]: I0702 00:21:35.868182 2211 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 00:21:35.875787 kubelet[2211]: E0702 00:21:35.875743 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:35.976849 kubelet[2211]: E0702 00:21:35.976783 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:36.077709 kubelet[2211]: E0702 00:21:36.077627 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:36.149865 kubelet[2211]: E0702 00:21:36.149744 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:36.149865 kubelet[2211]: E0702 00:21:36.149834 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:36.150023 kubelet[2211]: E0702 00:21:36.149907 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:36.178822 kubelet[2211]: E0702 00:21:36.178751 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:36.279526 kubelet[2211]: E0702 00:21:36.279454 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:36.380085 kubelet[2211]: E0702 00:21:36.380021 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:36.481033 kubelet[2211]: E0702 00:21:36.480919 2211 kubelet_node_status.go:458] "Error getting the current node 
from lister" err="node \"localhost\" not found" Jul 2 00:21:36.581982 kubelet[2211]: E0702 00:21:36.581882 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:36.682823 kubelet[2211]: E0702 00:21:36.682722 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:36.783063 kubelet[2211]: E0702 00:21:36.782878 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:36.883953 kubelet[2211]: E0702 00:21:36.883814 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:36.985311 kubelet[2211]: E0702 00:21:36.985133 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:37.085813 kubelet[2211]: E0702 00:21:37.085630 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:37.154004 kubelet[2211]: E0702 00:21:37.153919 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:37.186928 kubelet[2211]: E0702 00:21:37.186835 2211 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 00:21:37.759789 kubelet[2211]: E0702 00:21:37.759343 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:38.083965 kubelet[2211]: I0702 00:21:38.083795 2211 apiserver.go:52] "Watching apiserver" Jul 2 00:21:38.086915 kubelet[2211]: I0702 00:21:38.086806 2211 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 
00:21:38.153715 kubelet[2211]: E0702 00:21:38.153673 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:21:39.291841 systemd[1]: Reloading requested from client PID 2495 ('systemctl') (unit session-7.scope)... Jul 2 00:21:39.291859 systemd[1]: Reloading... Jul 2 00:21:39.372158 zram_generator::config[2535]: No configuration found. Jul 2 00:21:39.481748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:21:39.576486 systemd[1]: Reloading finished in 284 ms. Jul 2 00:21:39.626357 kubelet[2211]: I0702 00:21:39.626296 2211 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:21:39.626424 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:21:39.647562 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:21:39.647977 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:39.648034 systemd[1]: kubelet.service: Consumed 1.118s CPU time, 114.0M memory peak, 0B memory swap peak. Jul 2 00:21:39.655583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:21:39.798647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:21:39.809663 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:21:39.855976 kubelet[2577]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:21:39.855976 kubelet[2577]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:21:39.855976 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:21:39.855976 kubelet[2577]: I0702 00:21:39.855915 2577 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:21:39.861346 kubelet[2577]: I0702 00:21:39.861300 2577 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:21:39.861346 kubelet[2577]: I0702 00:21:39.861333 2577 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:21:39.861607 kubelet[2577]: I0702 00:21:39.861582 2577 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:21:39.863540 kubelet[2577]: I0702 00:21:39.863512 2577 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:21:39.864769 kubelet[2577]: I0702 00:21:39.864743 2577 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:21:39.871797 kubelet[2577]: I0702 00:21:39.871758 2577 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:21:39.872012 kubelet[2577]: I0702 00:21:39.871987 2577 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:21:39.872191 kubelet[2577]: I0702 00:21:39.872163 2577 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:21:39.872191 kubelet[2577]: I0702 00:21:39.872187 2577 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:21:39.872360 kubelet[2577]: I0702 00:21:39.872197 2577 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:21:39.872360 kubelet[2577]: I0702 
00:21:39.872243 2577 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:21:39.872360 kubelet[2577]: I0702 00:21:39.872348 2577 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:21:39.872360 kubelet[2577]: I0702 00:21:39.872362 2577 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:21:39.872557 kubelet[2577]: I0702 00:21:39.872389 2577 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:21:39.872557 kubelet[2577]: I0702 00:21:39.872400 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:21:39.875511 kubelet[2577]: I0702 00:21:39.874926 2577 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:21:39.875682 kubelet[2577]: I0702 00:21:39.875659 2577 server.go:1232] "Started kubelet" Jul 2 00:21:39.876526 kubelet[2577]: I0702 00:21:39.876501 2577 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:21:39.877429 kubelet[2577]: I0702 00:21:39.877242 2577 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:21:39.877878 kubelet[2577]: I0702 00:21:39.877852 2577 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:21:39.878141 kubelet[2577]: I0702 00:21:39.878122 2577 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:21:39.881446 kubelet[2577]: I0702 00:21:39.881415 2577 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:21:39.882589 kubelet[2577]: I0702 00:21:39.882564 2577 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:21:39.884006 kubelet[2577]: I0702 00:21:39.883949 2577 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:21:39.884305 kubelet[2577]: I0702 00:21:39.884272 2577 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:21:39.886923 kubelet[2577]: 
E0702 00:21:39.886898 2577 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:21:39.886969 kubelet[2577]: E0702 00:21:39.886934 2577 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:21:39.898174 kubelet[2577]: I0702 00:21:39.898143 2577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:21:39.899941 kubelet[2577]: I0702 00:21:39.899808 2577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:21:39.899941 kubelet[2577]: I0702 00:21:39.899827 2577 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:21:39.899941 kubelet[2577]: I0702 00:21:39.899845 2577 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:21:39.899941 kubelet[2577]: E0702 00:21:39.899891 2577 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:21:39.935352 kubelet[2577]: I0702 00:21:39.935304 2577 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:21:39.935529 kubelet[2577]: I0702 00:21:39.935518 2577 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:21:39.935590 kubelet[2577]: I0702 00:21:39.935581 2577 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:21:39.935875 kubelet[2577]: I0702 00:21:39.935862 2577 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:21:39.935978 kubelet[2577]: I0702 00:21:39.935965 2577 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:21:39.936075 kubelet[2577]: I0702 00:21:39.936065 2577 policy_none.go:49] "None policy: Start" Jul 2 00:21:39.937203 kubelet[2577]: I0702 00:21:39.937176 2577 memory_manager.go:169] "Starting 
memorymanager" policy="None" Jul 2 00:21:39.937253 kubelet[2577]: I0702 00:21:39.937226 2577 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:21:39.937514 kubelet[2577]: I0702 00:21:39.937476 2577 state_mem.go:75] "Updated machine memory state" Jul 2 00:21:39.942264 kubelet[2577]: I0702 00:21:39.942079 2577 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:21:39.942582 kubelet[2577]: I0702 00:21:39.942386 2577 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:21:39.988313 kubelet[2577]: I0702 00:21:39.988257 2577 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 00:21:40.000435 kubelet[2577]: I0702 00:21:40.000381 2577 topology_manager.go:215] "Topology Admit Handler" podUID="b5419719f94957e1d7c1e1eeaa38f8e1" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:21:40.000614 kubelet[2577]: I0702 00:21:40.000482 2577 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:21:40.000614 kubelet[2577]: I0702 00:21:40.000517 2577 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:21:40.185637 kubelet[2577]: I0702 00:21:40.185581 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:21:40.185637 kubelet[2577]: I0702 00:21:40.185632 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:21:40.185637 kubelet[2577]: I0702 00:21:40.185652 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:21:40.185842 kubelet[2577]: I0702 00:21:40.185671 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5419719f94957e1d7c1e1eeaa38f8e1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b5419719f94957e1d7c1e1eeaa38f8e1\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:21:40.185842 kubelet[2577]: I0702 00:21:40.185692 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5419719f94957e1d7c1e1eeaa38f8e1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b5419719f94957e1d7c1e1eeaa38f8e1\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:21:40.185842 kubelet[2577]: I0702 00:21:40.185713 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:21:40.185842 kubelet[2577]: I0702 00:21:40.185732 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/b5419719f94957e1d7c1e1eeaa38f8e1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b5419719f94957e1d7c1e1eeaa38f8e1\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 00:21:40.185842 kubelet[2577]: I0702 00:21:40.185752 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:21:40.185951 kubelet[2577]: I0702 00:21:40.185774 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 00:21:40.338930 kubelet[2577]: E0702 00:21:40.338891 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:40.339125 kubelet[2577]: E0702 00:21:40.338901 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:40.370095 kubelet[2577]: E0702 00:21:40.370018 2577 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 2 00:21:40.370095 kubelet[2577]: E0702 00:21:40.370373 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:40.376462 kubelet[2577]: I0702 00:21:40.376439 2577 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Jul 2 00:21:40.377089 kubelet[2577]: I0702 00:21:40.377046 2577 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Jul 2 00:21:40.873096 kubelet[2577]: I0702 00:21:40.873038 2577 apiserver.go:52] "Watching apiserver"
Jul 2 00:21:40.884935 kubelet[2577]: I0702 00:21:40.884888 2577 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 00:21:40.908950 kubelet[2577]: E0702 00:21:40.908908 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:40.909136 kubelet[2577]: E0702 00:21:40.909067 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:41.096034 kubelet[2577]: E0702 00:21:41.095697 2577 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 2 00:21:41.096310 kubelet[2577]: E0702 00:21:41.096293 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:41.358758 kubelet[2577]: I0702 00:21:41.358272 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.358199936 podCreationTimestamp="2024-07-02 00:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:21:41.355821535 +0000 UTC m=+1.541429544" watchObservedRunningTime="2024-07-02 00:21:41.358199936 +0000 UTC m=+1.543807935"
Jul 2 00:21:41.393625 kubelet[2577]: I0702 00:21:41.393563 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.393520023 podCreationTimestamp="2024-07-02 00:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:21:41.393240621 +0000 UTC m=+1.578848620" watchObservedRunningTime="2024-07-02 00:21:41.393520023 +0000 UTC m=+1.579128022"
Jul 2 00:21:41.910995 kubelet[2577]: E0702 00:21:41.910179 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:46.272706 kubelet[2577]: E0702 00:21:46.272657 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:46.887328 kubelet[2577]: E0702 00:21:46.887273 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:46.915248 kubelet[2577]: E0702 00:21:46.915165 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:46.915420 kubelet[2577]: E0702 00:21:46.915255 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:47.455784 sudo[1629]: pam_unix(sudo:session): session closed for user root
Jul 2 00:21:47.458711 sshd[1626]: pam_unix(sshd:session): session closed for user core
Jul 2 00:21:47.464875 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:43460.service: Deactivated successfully.
Jul 2 00:21:47.467876 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:21:47.468209 systemd[1]: session-7.scope: Consumed 5.521s CPU time, 135.7M memory peak, 0B memory swap peak.
Jul 2 00:21:47.468860 systemd-logind[1431]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:21:47.470480 systemd-logind[1431]: Removed session 7.
Jul 2 00:21:47.796194 kubelet[2577]: E0702 00:21:47.796048 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:47.916714 kubelet[2577]: E0702 00:21:47.916680 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:54.297728 kubelet[2577]: I0702 00:21:54.296654 2577 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 00:21:54.298221 containerd[1449]: time="2024-07-02T00:21:54.297837639Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 00:21:54.299330 kubelet[2577]: I0702 00:21:54.298681 2577 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 00:21:54.686510 kubelet[2577]: I0702 00:21:54.686428 2577 topology_manager.go:215] "Topology Admit Handler" podUID="0c9f0b97-47fc-4b8a-b714-46815ca10232" podNamespace="kube-system" podName="kube-proxy-4h422"
Jul 2 00:21:54.695073 systemd[1]: Created slice kubepods-besteffort-pod0c9f0b97_47fc_4b8a_b714_46815ca10232.slice - libcontainer container kubepods-besteffort-pod0c9f0b97_47fc_4b8a_b714_46815ca10232.slice.
Jul 2 00:21:54.871963 kubelet[2577]: I0702 00:21:54.871878 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c9f0b97-47fc-4b8a-b714-46815ca10232-kube-proxy\") pod \"kube-proxy-4h422\" (UID: \"0c9f0b97-47fc-4b8a-b714-46815ca10232\") " pod="kube-system/kube-proxy-4h422"
Jul 2 00:21:54.871963 kubelet[2577]: I0702 00:21:54.871938 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgjf\" (UniqueName: \"kubernetes.io/projected/0c9f0b97-47fc-4b8a-b714-46815ca10232-kube-api-access-dzgjf\") pod \"kube-proxy-4h422\" (UID: \"0c9f0b97-47fc-4b8a-b714-46815ca10232\") " pod="kube-system/kube-proxy-4h422"
Jul 2 00:21:54.871963 kubelet[2577]: I0702 00:21:54.871965 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c9f0b97-47fc-4b8a-b714-46815ca10232-xtables-lock\") pod \"kube-proxy-4h422\" (UID: \"0c9f0b97-47fc-4b8a-b714-46815ca10232\") " pod="kube-system/kube-proxy-4h422"
Jul 2 00:21:54.871963 kubelet[2577]: I0702 00:21:54.871987 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c9f0b97-47fc-4b8a-b714-46815ca10232-lib-modules\") pod \"kube-proxy-4h422\" (UID: \"0c9f0b97-47fc-4b8a-b714-46815ca10232\") " pod="kube-system/kube-proxy-4h422"
Jul 2 00:21:55.006889 kubelet[2577]: E0702 00:21:55.006740 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:55.007621 containerd[1449]: time="2024-07-02T00:21:55.007437474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4h422,Uid:0c9f0b97-47fc-4b8a-b714-46815ca10232,Namespace:kube-system,Attempt:0,}"
Jul 2 00:21:55.062357 containerd[1449]: time="2024-07-02T00:21:55.062245009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:21:55.062357 containerd[1449]: time="2024-07-02T00:21:55.062309493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:21:55.062357 containerd[1449]: time="2024-07-02T00:21:55.062326406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:21:55.062357 containerd[1449]: time="2024-07-02T00:21:55.062336566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:21:55.090295 systemd[1]: Started cri-containerd-938ff9fcbbfb1734e6a440927a5bf54d634a6bae1fc2cd248a4f052db08fc2b6.scope - libcontainer container 938ff9fcbbfb1734e6a440927a5bf54d634a6bae1fc2cd248a4f052db08fc2b6.
Jul 2 00:21:55.116786 containerd[1449]: time="2024-07-02T00:21:55.116715224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4h422,Uid:0c9f0b97-47fc-4b8a-b714-46815ca10232,Namespace:kube-system,Attempt:0,} returns sandbox id \"938ff9fcbbfb1734e6a440927a5bf54d634a6bae1fc2cd248a4f052db08fc2b6\""
Jul 2 00:21:55.117594 kubelet[2577]: E0702 00:21:55.117559 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:55.121083 containerd[1449]: time="2024-07-02T00:21:55.121046592Z" level=info msg="CreateContainer within sandbox \"938ff9fcbbfb1734e6a440927a5bf54d634a6bae1fc2cd248a4f052db08fc2b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:21:55.143481 containerd[1449]: time="2024-07-02T00:21:55.143419898Z" level=info msg="CreateContainer within sandbox \"938ff9fcbbfb1734e6a440927a5bf54d634a6bae1fc2cd248a4f052db08fc2b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f890e9ad3147a2ed4123897fd7a101f87834c0f7f2f5c50ca08e1b039355d92\""
Jul 2 00:21:55.144080 containerd[1449]: time="2024-07-02T00:21:55.144047308Z" level=info msg="StartContainer for \"3f890e9ad3147a2ed4123897fd7a101f87834c0f7f2f5c50ca08e1b039355d92\""
Jul 2 00:21:55.174399 systemd[1]: Started cri-containerd-3f890e9ad3147a2ed4123897fd7a101f87834c0f7f2f5c50ca08e1b039355d92.scope - libcontainer container 3f890e9ad3147a2ed4123897fd7a101f87834c0f7f2f5c50ca08e1b039355d92.
Jul 2 00:21:55.212140 kubelet[2577]: I0702 00:21:55.210403 2577 topology_manager.go:215] "Topology Admit Handler" podUID="d39eeac8-30fb-4b19-b53d-3b098e568a42" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-8j425"
Jul 2 00:21:55.219773 containerd[1449]: time="2024-07-02T00:21:55.219729446Z" level=info msg="StartContainer for \"3f890e9ad3147a2ed4123897fd7a101f87834c0f7f2f5c50ca08e1b039355d92\" returns successfully"
Jul 2 00:21:55.226581 systemd[1]: Created slice kubepods-besteffort-podd39eeac8_30fb_4b19_b53d_3b098e568a42.slice - libcontainer container kubepods-besteffort-podd39eeac8_30fb_4b19_b53d_3b098e568a42.slice.
Jul 2 00:21:55.274952 kubelet[2577]: I0702 00:21:55.274817 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d39eeac8-30fb-4b19-b53d-3b098e568a42-var-lib-calico\") pod \"tigera-operator-76c4974c85-8j425\" (UID: \"d39eeac8-30fb-4b19-b53d-3b098e568a42\") " pod="tigera-operator/tigera-operator-76c4974c85-8j425"
Jul 2 00:21:55.274952 kubelet[2577]: I0702 00:21:55.274928 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852sg\" (UniqueName: \"kubernetes.io/projected/d39eeac8-30fb-4b19-b53d-3b098e568a42-kube-api-access-852sg\") pod \"tigera-operator-76c4974c85-8j425\" (UID: \"d39eeac8-30fb-4b19-b53d-3b098e568a42\") " pod="tigera-operator/tigera-operator-76c4974c85-8j425"
Jul 2 00:21:55.529733 containerd[1449]: time="2024-07-02T00:21:55.529505762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8j425,Uid:d39eeac8-30fb-4b19-b53d-3b098e568a42,Namespace:tigera-operator,Attempt:0,}"
Jul 2 00:21:55.557181 containerd[1449]: time="2024-07-02T00:21:55.557031490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:21:55.557359 containerd[1449]: time="2024-07-02T00:21:55.557245794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:21:55.557359 containerd[1449]: time="2024-07-02T00:21:55.557285901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:21:55.557359 containerd[1449]: time="2024-07-02T00:21:55.557308264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:21:55.579284 systemd[1]: Started cri-containerd-5c86ca8d3bd33706b3e01e89955a029bd16b9d76cf4e589173ee59f6161a1d01.scope - libcontainer container 5c86ca8d3bd33706b3e01e89955a029bd16b9d76cf4e589173ee59f6161a1d01.
Jul 2 00:21:55.619806 containerd[1449]: time="2024-07-02T00:21:55.619752256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8j425,Uid:d39eeac8-30fb-4b19-b53d-3b098e568a42,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5c86ca8d3bd33706b3e01e89955a029bd16b9d76cf4e589173ee59f6161a1d01\""
Jul 2 00:21:55.621560 containerd[1449]: time="2024-07-02T00:21:55.621533043Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jul 2 00:21:55.931176 kubelet[2577]: E0702 00:21:55.931034 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:21:55.953895 kubelet[2577]: I0702 00:21:55.953713 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4h422" podStartSLOduration=1.953675179 podCreationTimestamp="2024-07-02 00:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:21:55.953498528 +0000 UTC m=+16.139106517" watchObservedRunningTime="2024-07-02 00:21:55.953675179 +0000 UTC m=+16.139283178"
Jul 2 00:21:56.005842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466876719.mount: Deactivated successfully.
Jul 2 00:21:57.627652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493741583.mount: Deactivated successfully.
Jul 2 00:21:57.986165 containerd[1449]: time="2024-07-02T00:21:57.986083678Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:21:57.987168 containerd[1449]: time="2024-07-02T00:21:57.987128655Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076056"
Jul 2 00:21:57.989349 containerd[1449]: time="2024-07-02T00:21:57.989319880Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:21:57.991916 containerd[1449]: time="2024-07-02T00:21:57.991875319Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:21:57.993053 containerd[1449]: time="2024-07-02T00:21:57.993013565Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.371445424s"
Jul 2 00:21:57.993053 containerd[1449]: time="2024-07-02T00:21:57.993046247Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Jul 2 00:21:57.994557 containerd[1449]: time="2024-07-02T00:21:57.994521082Z" level=info msg="CreateContainer within sandbox \"5c86ca8d3bd33706b3e01e89955a029bd16b9d76cf4e589173ee59f6161a1d01\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 2 00:21:58.008426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620932878.mount: Deactivated successfully.
Jul 2 00:21:58.009509 containerd[1449]: time="2024-07-02T00:21:58.009457956Z" level=info msg="CreateContainer within sandbox \"5c86ca8d3bd33706b3e01e89955a029bd16b9d76cf4e589173ee59f6161a1d01\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ac50535eb20f0a338726e46f2730dea34a089c671e87813f80901bfc5393a20a\""
Jul 2 00:21:58.010132 containerd[1449]: time="2024-07-02T00:21:58.009997917Z" level=info msg="StartContainer for \"ac50535eb20f0a338726e46f2730dea34a089c671e87813f80901bfc5393a20a\""
Jul 2 00:21:58.042274 systemd[1]: Started cri-containerd-ac50535eb20f0a338726e46f2730dea34a089c671e87813f80901bfc5393a20a.scope - libcontainer container ac50535eb20f0a338726e46f2730dea34a089c671e87813f80901bfc5393a20a.
Jul 2 00:21:58.114078 containerd[1449]: time="2024-07-02T00:21:58.113980694Z" level=info msg="StartContainer for \"ac50535eb20f0a338726e46f2730dea34a089c671e87813f80901bfc5393a20a\" returns successfully"
Jul 2 00:21:58.944060 kubelet[2577]: I0702 00:21:58.943997 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-8j425" podStartSLOduration=1.5715968550000001 podCreationTimestamp="2024-07-02 00:21:55 +0000 UTC" firstStartedPulling="2024-07-02 00:21:55.620947372 +0000 UTC m=+15.806555381" lastFinishedPulling="2024-07-02 00:21:57.993295788 +0000 UTC m=+18.178903777" observedRunningTime="2024-07-02 00:21:58.943801684 +0000 UTC m=+19.129409683" watchObservedRunningTime="2024-07-02 00:21:58.943945251 +0000 UTC m=+19.129553250"
Jul 2 00:22:00.790441 kubelet[2577]: I0702 00:22:00.789700 2577 topology_manager.go:215] "Topology Admit Handler" podUID="9a29ecc3-3bc8-409a-ac79-deb3fa5dadfe" podNamespace="calico-system" podName="calico-typha-6445676fdc-dsxt4"
Jul 2 00:22:00.808448 systemd[1]: Created slice kubepods-besteffort-pod9a29ecc3_3bc8_409a_ac79_deb3fa5dadfe.slice - libcontainer container kubepods-besteffort-pod9a29ecc3_3bc8_409a_ac79_deb3fa5dadfe.slice.
Jul 2 00:22:00.809432 kubelet[2577]: I0702 00:22:00.809392 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9a29ecc3-3bc8-409a-ac79-deb3fa5dadfe-typha-certs\") pod \"calico-typha-6445676fdc-dsxt4\" (UID: \"9a29ecc3-3bc8-409a-ac79-deb3fa5dadfe\") " pod="calico-system/calico-typha-6445676fdc-dsxt4"
Jul 2 00:22:00.809510 kubelet[2577]: I0702 00:22:00.809439 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hqmz\" (UniqueName: \"kubernetes.io/projected/9a29ecc3-3bc8-409a-ac79-deb3fa5dadfe-kube-api-access-6hqmz\") pod \"calico-typha-6445676fdc-dsxt4\" (UID: \"9a29ecc3-3bc8-409a-ac79-deb3fa5dadfe\") " pod="calico-system/calico-typha-6445676fdc-dsxt4"
Jul 2 00:22:00.809510 kubelet[2577]: I0702 00:22:00.809463 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a29ecc3-3bc8-409a-ac79-deb3fa5dadfe-tigera-ca-bundle\") pod \"calico-typha-6445676fdc-dsxt4\" (UID: \"9a29ecc3-3bc8-409a-ac79-deb3fa5dadfe\") " pod="calico-system/calico-typha-6445676fdc-dsxt4"
Jul 2 00:22:00.837809 kubelet[2577]: I0702 00:22:00.837737 2577 topology_manager.go:215] "Topology Admit Handler" podUID="e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b" podNamespace="calico-system" podName="calico-node-kbr7m"
Jul 2 00:22:00.847470 systemd[1]: Created slice kubepods-besteffort-pode1bf24d7_b1bc_4730_95ae_6880ff8a9f9b.slice - libcontainer container kubepods-besteffort-pode1bf24d7_b1bc_4730_95ae_6880ff8a9f9b.slice.
Jul 2 00:22:00.910392 kubelet[2577]: I0702 00:22:00.910318 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-cni-net-dir\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.910392 kubelet[2577]: I0702 00:22:00.910393 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64q6b\" (UniqueName: \"kubernetes.io/projected/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-kube-api-access-64q6b\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911214 kubelet[2577]: I0702 00:22:00.910592 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-var-lib-calico\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911214 kubelet[2577]: I0702 00:22:00.910775 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-cni-bin-dir\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911214 kubelet[2577]: I0702 00:22:00.910825 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-xtables-lock\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911214 kubelet[2577]: I0702 00:22:00.910870 2577 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-node-certs\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911214 kubelet[2577]: I0702 00:22:00.910957 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-var-run-calico\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911590 kubelet[2577]: I0702 00:22:00.911010 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-cni-log-dir\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911590 kubelet[2577]: I0702 00:22:00.911056 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-policysync\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911590 kubelet[2577]: I0702 00:22:00.911183 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-lib-modules\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911590 kubelet[2577]: I0702 00:22:00.911303 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-tigera-ca-bundle\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.911590 kubelet[2577]: I0702 00:22:00.911332 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b-flexvol-driver-host\") pod \"calico-node-kbr7m\" (UID: \"e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b\") " pod="calico-system/calico-node-kbr7m" Jul 2 00:22:00.949463 kubelet[2577]: I0702 00:22:00.949409 2577 topology_manager.go:215] "Topology Admit Handler" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" podNamespace="calico-system" podName="csi-node-driver-c6f8d" Jul 2 00:22:00.949769 kubelet[2577]: E0702 00:22:00.949744 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:01.011580 kubelet[2577]: I0702 00:22:01.011530 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbdpf\" (UniqueName: \"kubernetes.io/projected/bb1123fc-5807-4f3a-afae-4341c695ae1f-kube-api-access-dbdpf\") pod \"csi-node-driver-c6f8d\" (UID: \"bb1123fc-5807-4f3a-afae-4341c695ae1f\") " pod="calico-system/csi-node-driver-c6f8d" Jul 2 00:22:01.011778 kubelet[2577]: I0702 00:22:01.011623 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bb1123fc-5807-4f3a-afae-4341c695ae1f-socket-dir\") pod \"csi-node-driver-c6f8d\" (UID: \"bb1123fc-5807-4f3a-afae-4341c695ae1f\") " 
pod="calico-system/csi-node-driver-c6f8d" Jul 2 00:22:01.013202 kubelet[2577]: E0702 00:22:01.013175 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.013202 kubelet[2577]: W0702 00:22:01.013198 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.013310 kubelet[2577]: E0702 00:22:01.013237 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.013487 kubelet[2577]: E0702 00:22:01.013461 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.013545 kubelet[2577]: W0702 00:22:01.013482 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.013545 kubelet[2577]: E0702 00:22:01.013512 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.016956 kubelet[2577]: E0702 00:22:01.016691 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.016956 kubelet[2577]: W0702 00:22:01.016726 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.016956 kubelet[2577]: E0702 00:22:01.016752 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.017579 kubelet[2577]: E0702 00:22:01.017374 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.017579 kubelet[2577]: W0702 00:22:01.017399 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.017713 kubelet[2577]: E0702 00:22:01.017669 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.018221 kubelet[2577]: E0702 00:22:01.018199 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.018221 kubelet[2577]: W0702 00:22:01.018219 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.018383 kubelet[2577]: E0702 00:22:01.018361 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.018694 kubelet[2577]: E0702 00:22:01.018676 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.018694 kubelet[2577]: W0702 00:22:01.018687 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.018797 kubelet[2577]: E0702 00:22:01.018732 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.019308 kubelet[2577]: E0702 00:22:01.019277 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.019308 kubelet[2577]: W0702 00:22:01.019292 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.019397 kubelet[2577]: E0702 00:22:01.019348 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.019591 kubelet[2577]: E0702 00:22:01.019568 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.019591 kubelet[2577]: W0702 00:22:01.019587 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.019674 kubelet[2577]: E0702 00:22:01.019622 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.019879 kubelet[2577]: E0702 00:22:01.019845 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.019879 kubelet[2577]: W0702 00:22:01.019860 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.020248 kubelet[2577]: E0702 00:22:01.020005 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.020248 kubelet[2577]: E0702 00:22:01.020175 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.020248 kubelet[2577]: W0702 00:22:01.020215 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.020439 kubelet[2577]: E0702 00:22:01.020305 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.020531 kubelet[2577]: E0702 00:22:01.020510 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.020531 kubelet[2577]: W0702 00:22:01.020525 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.020625 kubelet[2577]: E0702 00:22:01.020610 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.020910 kubelet[2577]: E0702 00:22:01.020891 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.020954 kubelet[2577]: W0702 00:22:01.020933 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.021132 kubelet[2577]: E0702 00:22:01.021088 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.021211 kubelet[2577]: I0702 00:22:01.021172 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bb1123fc-5807-4f3a-afae-4341c695ae1f-varrun\") pod \"csi-node-driver-c6f8d\" (UID: \"bb1123fc-5807-4f3a-afae-4341c695ae1f\") " pod="calico-system/csi-node-driver-c6f8d" Jul 2 00:22:01.021315 kubelet[2577]: E0702 00:22:01.021299 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.021315 kubelet[2577]: W0702 00:22:01.021313 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.021438 kubelet[2577]: E0702 00:22:01.021423 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.021570 kubelet[2577]: E0702 00:22:01.021553 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.021570 kubelet[2577]: W0702 00:22:01.021569 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.021712 kubelet[2577]: E0702 00:22:01.021685 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.031306 kubelet[2577]: E0702 00:22:01.031279 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.031306 kubelet[2577]: W0702 00:22:01.031299 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.031384 kubelet[2577]: E0702 00:22:01.031322 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.031384 kubelet[2577]: I0702 00:22:01.031354 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb1123fc-5807-4f3a-afae-4341c695ae1f-kubelet-dir\") pod \"csi-node-driver-c6f8d\" (UID: \"bb1123fc-5807-4f3a-afae-4341c695ae1f\") " pod="calico-system/csi-node-driver-c6f8d" Jul 2 00:22:01.031667 kubelet[2577]: E0702 00:22:01.031638 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.031667 kubelet[2577]: W0702 00:22:01.031660 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.031736 kubelet[2577]: E0702 00:22:01.031686 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.031736 kubelet[2577]: I0702 00:22:01.031711 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bb1123fc-5807-4f3a-afae-4341c695ae1f-registration-dir\") pod \"csi-node-driver-c6f8d\" (UID: \"bb1123fc-5807-4f3a-afae-4341c695ae1f\") " pod="calico-system/csi-node-driver-c6f8d" Jul 2 00:22:01.032173 kubelet[2577]: E0702 00:22:01.032096 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.032173 kubelet[2577]: W0702 00:22:01.032157 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.032270 kubelet[2577]: E0702 00:22:01.032190 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.032800 kubelet[2577]: E0702 00:22:01.032774 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.032800 kubelet[2577]: W0702 00:22:01.032792 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.032883 kubelet[2577]: E0702 00:22:01.032816 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.043914 kubelet[2577]: E0702 00:22:01.043880 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.043997 kubelet[2577]: W0702 00:22:01.043970 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.044043 kubelet[2577]: E0702 00:22:01.044031 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.114496 kubelet[2577]: E0702 00:22:01.114437 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:01.117258 containerd[1449]: time="2024-07-02T00:22:01.115292884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6445676fdc-dsxt4,Uid:9a29ecc3-3bc8-409a-ac79-deb3fa5dadfe,Namespace:calico-system,Attempt:0,}" Jul 2 00:22:01.142878 kubelet[2577]: E0702 00:22:01.142829 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.142878 kubelet[2577]: W0702 00:22:01.142856 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.142878 kubelet[2577]: E0702 00:22:01.142879 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.145801 kubelet[2577]: E0702 00:22:01.145778 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.145801 kubelet[2577]: W0702 00:22:01.145794 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.145891 kubelet[2577]: E0702 00:22:01.145877 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.146158 kubelet[2577]: E0702 00:22:01.146140 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.146359 kubelet[2577]: W0702 00:22:01.146238 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.146359 kubelet[2577]: E0702 00:22:01.146335 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.146638 kubelet[2577]: E0702 00:22:01.146605 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.146638 kubelet[2577]: W0702 00:22:01.146619 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.146831 kubelet[2577]: E0702 00:22:01.146792 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.146991 kubelet[2577]: E0702 00:22:01.146972 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.146991 kubelet[2577]: W0702 00:22:01.146986 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.147212 kubelet[2577]: E0702 00:22:01.147147 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.147342 kubelet[2577]: E0702 00:22:01.147327 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.147342 kubelet[2577]: W0702 00:22:01.147339 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.147431 kubelet[2577]: E0702 00:22:01.147378 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.147601 kubelet[2577]: E0702 00:22:01.147585 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.147601 kubelet[2577]: W0702 00:22:01.147598 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.147735 kubelet[2577]: E0702 00:22:01.147715 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.147827 kubelet[2577]: E0702 00:22:01.147813 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.147827 kubelet[2577]: W0702 00:22:01.147824 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.147909 kubelet[2577]: E0702 00:22:01.147857 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.148082 kubelet[2577]: E0702 00:22:01.148066 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.148082 kubelet[2577]: W0702 00:22:01.148081 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.148269 kubelet[2577]: E0702 00:22:01.148253 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.148367 kubelet[2577]: E0702 00:22:01.148353 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.148367 kubelet[2577]: W0702 00:22:01.148367 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.148445 kubelet[2577]: E0702 00:22:01.148400 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.148659 kubelet[2577]: E0702 00:22:01.148644 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.148659 kubelet[2577]: W0702 00:22:01.148655 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.148745 kubelet[2577]: E0702 00:22:01.148685 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.148956 kubelet[2577]: E0702 00:22:01.148941 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.148956 kubelet[2577]: W0702 00:22:01.148953 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.149052 kubelet[2577]: E0702 00:22:01.148999 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.149900 kubelet[2577]: E0702 00:22:01.149246 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.149900 kubelet[2577]: W0702 00:22:01.149260 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.149900 kubelet[2577]: E0702 00:22:01.149363 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.149900 kubelet[2577]: E0702 00:22:01.149467 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.149900 kubelet[2577]: W0702 00:22:01.149474 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.149900 kubelet[2577]: E0702 00:22:01.149510 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.149900 kubelet[2577]: E0702 00:22:01.149679 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.149900 kubelet[2577]: W0702 00:22:01.149688 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.149900 kubelet[2577]: E0702 00:22:01.149809 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.150275 kubelet[2577]: E0702 00:22:01.149927 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.150275 kubelet[2577]: W0702 00:22:01.149935 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.150275 kubelet[2577]: E0702 00:22:01.149961 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.150275 kubelet[2577]: E0702 00:22:01.150241 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.150275 kubelet[2577]: W0702 00:22:01.150250 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.150275 kubelet[2577]: E0702 00:22:01.150263 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.150676 kubelet[2577]: E0702 00:22:01.150643 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:01.150972 kubelet[2577]: E0702 00:22:01.150955 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.150972 kubelet[2577]: W0702 00:22:01.150968 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.151120 kubelet[2577]: E0702 00:22:01.150983 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.151498 containerd[1449]: time="2024-07-02T00:22:01.151458942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kbr7m,Uid:e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b,Namespace:calico-system,Attempt:0,}" Jul 2 00:22:01.167327 kubelet[2577]: E0702 00:22:01.167228 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.167327 kubelet[2577]: W0702 00:22:01.167254 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.167327 kubelet[2577]: E0702 00:22:01.167282 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:01.247811 kubelet[2577]: E0702 00:22:01.247763 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.247811 kubelet[2577]: W0702 00:22:01.247792 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.247811 kubelet[2577]: E0702 00:22:01.247828 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.319680 kubelet[2577]: E0702 00:22:01.319526 2577 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:01.319680 kubelet[2577]: W0702 00:22:01.319556 2577 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:01.319680 kubelet[2577]: E0702 00:22:01.319583 2577 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:01.366701 containerd[1449]: time="2024-07-02T00:22:01.366578358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:01.366701 containerd[1449]: time="2024-07-02T00:22:01.366670897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:01.367073 containerd[1449]: time="2024-07-02T00:22:01.366696586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:01.367073 containerd[1449]: time="2024-07-02T00:22:01.366725482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:01.371783 containerd[1449]: time="2024-07-02T00:22:01.371384920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:01.371783 containerd[1449]: time="2024-07-02T00:22:01.371461256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:01.371783 containerd[1449]: time="2024-07-02T00:22:01.371481476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:01.371783 containerd[1449]: time="2024-07-02T00:22:01.371508297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:01.395686 systemd[1]: Started cri-containerd-fb86bc102a5564407100e2a9c55ef41fd2a173d4cf40b6fc727142e084fa352e.scope - libcontainer container fb86bc102a5564407100e2a9c55ef41fd2a173d4cf40b6fc727142e084fa352e. Jul 2 00:22:01.402351 systemd[1]: Started cri-containerd-19b50141876e3a64c99747d8df8ba977ac45199eaf79f80eb6524e5251c69295.scope - libcontainer container 19b50141876e3a64c99747d8df8ba977ac45199eaf79f80eb6524e5251c69295. 
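[Editor's note] The FlexVolume error storm above repeats because kubelet periodically re-probes the plugin directory and each probe shells out to the driver binary, which does not yet exist on this node. A minimal sketch of that probe, using the path taken verbatim from the log (the `uds` binary name follows the `vendor~driver` directory convention; everything else here is illustrative, not kubelet's actual code):

```shell
#!/bin/sh
# Sketch of what kubelet's FlexVolume prober (driver-call.go) does: run
# "<driver> init" and parse stdout as JSON. With the binary missing, stdout is
# empty, which is exactly the "unexpected end of JSON input" seen in the log.
probe_flexvolume() {
    driver="$1/uds"    # in a "vendor~driver" dir the binary is named after the driver
    if [ -x "$driver" ]; then
        "$driver" init                 # expected output: {"status":"Success",...}
    else
        echo "driver call failed: $driver: executable file not found" >&2
        return 1
    fi
}

# Path from the log; on this node the binary was never installed, so the probe
# fails the same way kubelet's did. "|| true" only keeps this sketch's exit clean.
probe_flexvolume /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds || true
```

The probe errors stop appearing later in the log, presumably once the calico-node pod's flexvol-driver init container (started below from the pod2daemon-flexvol image) drops the binary into that directory.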
Jul 2 00:22:01.438580 containerd[1449]: time="2024-07-02T00:22:01.438513558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kbr7m,Uid:e1bf24d7-b1bc-4730-95ae-6880ff8a9f9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"19b50141876e3a64c99747d8df8ba977ac45199eaf79f80eb6524e5251c69295\"" Jul 2 00:22:01.440164 kubelet[2577]: E0702 00:22:01.439564 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:01.442764 containerd[1449]: time="2024-07-02T00:22:01.442707669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:22:01.454273 containerd[1449]: time="2024-07-02T00:22:01.454234089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6445676fdc-dsxt4,Uid:9a29ecc3-3bc8-409a-ac79-deb3fa5dadfe,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb86bc102a5564407100e2a9c55ef41fd2a173d4cf40b6fc727142e084fa352e\"" Jul 2 00:22:01.456646 kubelet[2577]: E0702 00:22:01.456476 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:02.900835 kubelet[2577]: E0702 00:22:02.900772 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:03.272164 containerd[1449]: time="2024-07-02T00:22:03.272084128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:03.273091 containerd[1449]: time="2024-07-02T00:22:03.273032102Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 00:22:03.274619 containerd[1449]: time="2024-07-02T00:22:03.274577116Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:03.277177 containerd[1449]: time="2024-07-02T00:22:03.277135458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:03.278228 containerd[1449]: time="2024-07-02T00:22:03.277808572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.835051328s" Jul 2 00:22:03.278228 containerd[1449]: time="2024-07-02T00:22:03.278131312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:22:03.282906 containerd[1449]: time="2024-07-02T00:22:03.282856495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:22:03.283767 containerd[1449]: time="2024-07-02T00:22:03.283723907Z" level=info msg="CreateContainer within sandbox \"19b50141876e3a64c99747d8df8ba977ac45199eaf79f80eb6524e5251c69295\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:22:03.305063 containerd[1449]: time="2024-07-02T00:22:03.304978451Z" level=info msg="CreateContainer within sandbox \"19b50141876e3a64c99747d8df8ba977ac45199eaf79f80eb6524e5251c69295\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ab43df1bcc57ea8f4591d56e9400ef511e741887a4be35d1dd97ce42263e1da6\"" Jul 2 00:22:03.305799 containerd[1449]: time="2024-07-02T00:22:03.305541518Z" level=info msg="StartContainer for \"ab43df1bcc57ea8f4591d56e9400ef511e741887a4be35d1dd97ce42263e1da6\"" Jul 2 00:22:03.347351 systemd[1]: Started cri-containerd-ab43df1bcc57ea8f4591d56e9400ef511e741887a4be35d1dd97ce42263e1da6.scope - libcontainer container ab43df1bcc57ea8f4591d56e9400ef511e741887a4be35d1dd97ce42263e1da6. Jul 2 00:22:03.383945 containerd[1449]: time="2024-07-02T00:22:03.383901646Z" level=info msg="StartContainer for \"ab43df1bcc57ea8f4591d56e9400ef511e741887a4be35d1dd97ce42263e1da6\" returns successfully" Jul 2 00:22:03.399598 systemd[1]: cri-containerd-ab43df1bcc57ea8f4591d56e9400ef511e741887a4be35d1dd97ce42263e1da6.scope: Deactivated successfully. Jul 2 00:22:03.480660 containerd[1449]: time="2024-07-02T00:22:03.480591525Z" level=info msg="shim disconnected" id=ab43df1bcc57ea8f4591d56e9400ef511e741887a4be35d1dd97ce42263e1da6 namespace=k8s.io Jul 2 00:22:03.480660 containerd[1449]: time="2024-07-02T00:22:03.480647711Z" level=warning msg="cleaning up after shim disconnected" id=ab43df1bcc57ea8f4591d56e9400ef511e741887a4be35d1dd97ce42263e1da6 namespace=k8s.io Jul 2 00:22:03.480660 containerd[1449]: time="2024-07-02T00:22:03.480656318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:22:03.948993 kubelet[2577]: E0702 00:22:03.948951 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:04.300659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab43df1bcc57ea8f4591d56e9400ef511e741887a4be35d1dd97ce42263e1da6-rootfs.mount: Deactivated successfully. 
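[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" warning is kubelet noting that the node's resolv.conf lists more nameservers than can be applied; the applied line in the log is trimmed to three (1.1.1.1 1.0.0.1 8.8.8.8), matching the glibc resolver's limit of three nameserver entries. A sketch of the underlying check, with a hypothetical sample file (the real resolv.conf on this node is not shown in the log):

```shell
#!/bin/sh
# Count "nameserver" lines in a resolv.conf-style file; kubelet applies at most
# three and warns that the rest are omitted (the log shows 1.1.1.1 1.0.0.1
# 8.8.8.8 kept).
count_nameservers() {
    grep -c '^nameserver' "$1"
}

# Hypothetical sample reproducing the condition: four servers listed, one dropped.
sample=$(mktemp)
printf 'nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n' > "$sample"
n=$(count_nameservers "$sample")
[ "$n" -gt 3 ] && echo "$n nameservers listed; only the first 3 will be applied"
rm -f "$sample"
```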
Jul 2 00:22:04.902241 kubelet[2577]: E0702 00:22:04.902199 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:06.530979 containerd[1449]: time="2024-07-02T00:22:06.530896038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:06.532425 containerd[1449]: time="2024-07-02T00:22:06.532365441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 00:22:06.533959 containerd[1449]: time="2024-07-02T00:22:06.533909387Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:06.537875 containerd[1449]: time="2024-07-02T00:22:06.537787976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:06.538835 containerd[1449]: time="2024-07-02T00:22:06.538776689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.255874037s" Jul 2 00:22:06.538835 containerd[1449]: time="2024-07-02T00:22:06.538809882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference 
\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 00:22:06.540973 containerd[1449]: time="2024-07-02T00:22:06.540747363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:22:06.558767 containerd[1449]: time="2024-07-02T00:22:06.558676596Z" level=info msg="CreateContainer within sandbox \"fb86bc102a5564407100e2a9c55ef41fd2a173d4cf40b6fc727142e084fa352e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:22:06.869863 containerd[1449]: time="2024-07-02T00:22:06.869683944Z" level=info msg="CreateContainer within sandbox \"fb86bc102a5564407100e2a9c55ef41fd2a173d4cf40b6fc727142e084fa352e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2499e24ee83091929099d89140b6a9d23344de433bfc7692c9dfeafcb05c3a99\"" Jul 2 00:22:06.872194 containerd[1449]: time="2024-07-02T00:22:06.871164760Z" level=info msg="StartContainer for \"2499e24ee83091929099d89140b6a9d23344de433bfc7692c9dfeafcb05c3a99\"" Jul 2 00:22:06.900849 kubelet[2577]: E0702 00:22:06.900789 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:06.919382 systemd[1]: Started cri-containerd-2499e24ee83091929099d89140b6a9d23344de433bfc7692c9dfeafcb05c3a99.scope - libcontainer container 2499e24ee83091929099d89140b6a9d23344de433bfc7692c9dfeafcb05c3a99. 
Jul 2 00:22:06.981233 containerd[1449]: time="2024-07-02T00:22:06.981153527Z" level=info msg="StartContainer for \"2499e24ee83091929099d89140b6a9d23344de433bfc7692c9dfeafcb05c3a99\" returns successfully" Jul 2 00:22:07.996474 kubelet[2577]: E0702 00:22:07.994160 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:08.038778 kubelet[2577]: I0702 00:22:08.038737 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6445676fdc-dsxt4" podStartSLOduration=2.955677655 podCreationTimestamp="2024-07-02 00:22:00 +0000 UTC" firstStartedPulling="2024-07-02 00:22:01.456837144 +0000 UTC m=+21.642445143" lastFinishedPulling="2024-07-02 00:22:06.539849322 +0000 UTC m=+26.725457321" observedRunningTime="2024-07-02 00:22:08.03449035 +0000 UTC m=+28.220098359" watchObservedRunningTime="2024-07-02 00:22:08.038689833 +0000 UTC m=+28.224297832" Jul 2 00:22:08.900434 kubelet[2577]: E0702 00:22:08.900293 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:08.997772 kubelet[2577]: I0702 00:22:08.996653 2577 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:22:08.997772 kubelet[2577]: E0702 00:22:08.997394 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:09.998352 kubelet[2577]: E0702 00:22:09.997916 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 2 00:22:10.900521 kubelet[2577]: E0702 00:22:10.900482 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:10.999769 kubelet[2577]: E0702 00:22:10.999730 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:12.900451 kubelet[2577]: E0702 00:22:12.900384 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:14.900920 kubelet[2577]: E0702 00:22:14.900854 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:16.900133 kubelet[2577]: E0702 00:22:16.900062 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:17.789141 containerd[1449]: time="2024-07-02T00:22:17.789041971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 
00:22:17.790714 containerd[1449]: time="2024-07-02T00:22:17.790643796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:22:17.797248 containerd[1449]: time="2024-07-02T00:22:17.797162531Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:17.800292 containerd[1449]: time="2024-07-02T00:22:17.800249902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:17.801252 containerd[1449]: time="2024-07-02T00:22:17.801200119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 11.260398924s" Jul 2 00:22:17.801353 containerd[1449]: time="2024-07-02T00:22:17.801255775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:22:17.803386 containerd[1449]: time="2024-07-02T00:22:17.803336091Z" level=info msg="CreateContainer within sandbox \"19b50141876e3a64c99747d8df8ba977ac45199eaf79f80eb6524e5251c69295\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:22:17.825640 containerd[1449]: time="2024-07-02T00:22:17.825559378Z" level=info msg="CreateContainer within sandbox \"19b50141876e3a64c99747d8df8ba977ac45199eaf79f80eb6524e5251c69295\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2f0ab9b1a2288d041375172a5b0bf92ff48f30793cae57d9f0007dabff3fdfd0\"" Jul 2 
00:22:17.826187 containerd[1449]: time="2024-07-02T00:22:17.826152747Z" level=info msg="StartContainer for \"2f0ab9b1a2288d041375172a5b0bf92ff48f30793cae57d9f0007dabff3fdfd0\"" Jul 2 00:22:17.869294 systemd[1]: Started cri-containerd-2f0ab9b1a2288d041375172a5b0bf92ff48f30793cae57d9f0007dabff3fdfd0.scope - libcontainer container 2f0ab9b1a2288d041375172a5b0bf92ff48f30793cae57d9f0007dabff3fdfd0. Jul 2 00:22:17.917241 containerd[1449]: time="2024-07-02T00:22:17.917072480Z" level=info msg="StartContainer for \"2f0ab9b1a2288d041375172a5b0bf92ff48f30793cae57d9f0007dabff3fdfd0\" returns successfully" Jul 2 00:22:18.014564 kubelet[2577]: E0702 00:22:18.014498 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:18.900589 kubelet[2577]: E0702 00:22:18.900520 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:19.015463 kubelet[2577]: E0702 00:22:19.015424 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:19.255771 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:56226.service - OpenSSH per-connection server daemon (10.0.0.1:56226). 
Jul 2 00:22:20.596383 sshd[3287]: Accepted publickey for core from 10.0.0.1 port 56226 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:22:20.598382 sshd[3287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:20.900753 kubelet[2577]: E0702 00:22:20.900606 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:21.087406 systemd-logind[1431]: New session 8 of user core. Jul 2 00:22:21.094440 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:22:22.268571 sshd[3287]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:22.271919 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:56226.service: Deactivated successfully. Jul 2 00:22:22.274458 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:22:22.276398 systemd-logind[1431]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:22:22.277567 systemd-logind[1431]: Removed session 8. 
Jul 2 00:22:22.900540 kubelet[2577]: E0702 00:22:22.900492 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:23.843539 containerd[1449]: time="2024-07-02T00:22:23.843434207Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:22:23.847327 systemd[1]: cri-containerd-2f0ab9b1a2288d041375172a5b0bf92ff48f30793cae57d9f0007dabff3fdfd0.scope: Deactivated successfully. Jul 2 00:22:23.873991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f0ab9b1a2288d041375172a5b0bf92ff48f30793cae57d9f0007dabff3fdfd0-rootfs.mount: Deactivated successfully. 
Jul 2 00:22:23.895959 containerd[1449]: time="2024-07-02T00:22:23.895754458Z" level=info msg="shim disconnected" id=2f0ab9b1a2288d041375172a5b0bf92ff48f30793cae57d9f0007dabff3fdfd0 namespace=k8s.io Jul 2 00:22:23.895959 containerd[1449]: time="2024-07-02T00:22:23.895861051Z" level=warning msg="cleaning up after shim disconnected" id=2f0ab9b1a2288d041375172a5b0bf92ff48f30793cae57d9f0007dabff3fdfd0 namespace=k8s.io Jul 2 00:22:23.895959 containerd[1449]: time="2024-07-02T00:22:23.895873855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:22:23.907581 kubelet[2577]: I0702 00:22:23.907532 2577 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:22:23.927996 kubelet[2577]: I0702 00:22:23.927951 2577 topology_manager.go:215] "Topology Admit Handler" podUID="9761039f-dc00-496a-bcc0-b4763a123012" podNamespace="kube-system" podName="coredns-5dd5756b68-fp5cw" Jul 2 00:22:23.932997 kubelet[2577]: I0702 00:22:23.932946 2577 topology_manager.go:215] "Topology Admit Handler" podUID="bad383c5-33ee-4ea7-a464-f6479e4f0591" podNamespace="kube-system" podName="coredns-5dd5756b68-xnbjp" Jul 2 00:22:23.938574 kubelet[2577]: I0702 00:22:23.938516 2577 topology_manager.go:215] "Topology Admit Handler" podUID="d2f026c7-909c-46f0-8429-002f30b8f45f" podNamespace="calico-system" podName="calico-kube-controllers-7889d5969c-8bv75" Jul 2 00:22:23.944320 systemd[1]: Created slice kubepods-burstable-pod9761039f_dc00_496a_bcc0_b4763a123012.slice - libcontainer container kubepods-burstable-pod9761039f_dc00_496a_bcc0_b4763a123012.slice. Jul 2 00:22:23.952941 systemd[1]: Created slice kubepods-burstable-podbad383c5_33ee_4ea7_a464_f6479e4f0591.slice - libcontainer container kubepods-burstable-podbad383c5_33ee_4ea7_a464_f6479e4f0591.slice. Jul 2 00:22:23.960777 systemd[1]: Created slice kubepods-besteffort-podd2f026c7_909c_46f0_8429_002f30b8f45f.slice - libcontainer container kubepods-besteffort-podd2f026c7_909c_46f0_8429_002f30b8f45f.slice. 
Jul 2 00:22:24.029302 kubelet[2577]: E0702 00:22:24.029262 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:24.030052 containerd[1449]: time="2024-07-02T00:22:24.029991565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:22:24.038487 kubelet[2577]: I0702 00:22:24.038437 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bad383c5-33ee-4ea7-a464-f6479e4f0591-config-volume\") pod \"coredns-5dd5756b68-xnbjp\" (UID: \"bad383c5-33ee-4ea7-a464-f6479e4f0591\") " pod="kube-system/coredns-5dd5756b68-xnbjp" Jul 2 00:22:24.038487 kubelet[2577]: I0702 00:22:24.038485 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74qn9\" (UniqueName: \"kubernetes.io/projected/bad383c5-33ee-4ea7-a464-f6479e4f0591-kube-api-access-74qn9\") pod \"coredns-5dd5756b68-xnbjp\" (UID: \"bad383c5-33ee-4ea7-a464-f6479e4f0591\") " pod="kube-system/coredns-5dd5756b68-xnbjp" Jul 2 00:22:24.038778 kubelet[2577]: I0702 00:22:24.038523 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwc9\" (UniqueName: \"kubernetes.io/projected/d2f026c7-909c-46f0-8429-002f30b8f45f-kube-api-access-wfwc9\") pod \"calico-kube-controllers-7889d5969c-8bv75\" (UID: \"d2f026c7-909c-46f0-8429-002f30b8f45f\") " pod="calico-system/calico-kube-controllers-7889d5969c-8bv75" Jul 2 00:22:24.038778 kubelet[2577]: I0702 00:22:24.038555 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9761039f-dc00-496a-bcc0-b4763a123012-config-volume\") pod \"coredns-5dd5756b68-fp5cw\" (UID: \"9761039f-dc00-496a-bcc0-b4763a123012\") " 
pod="kube-system/coredns-5dd5756b68-fp5cw" Jul 2 00:22:24.038778 kubelet[2577]: I0702 00:22:24.038606 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2f026c7-909c-46f0-8429-002f30b8f45f-tigera-ca-bundle\") pod \"calico-kube-controllers-7889d5969c-8bv75\" (UID: \"d2f026c7-909c-46f0-8429-002f30b8f45f\") " pod="calico-system/calico-kube-controllers-7889d5969c-8bv75" Jul 2 00:22:24.038778 kubelet[2577]: I0702 00:22:24.038648 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65cbh\" (UniqueName: \"kubernetes.io/projected/9761039f-dc00-496a-bcc0-b4763a123012-kube-api-access-65cbh\") pod \"coredns-5dd5756b68-fp5cw\" (UID: \"9761039f-dc00-496a-bcc0-b4763a123012\") " pod="kube-system/coredns-5dd5756b68-fp5cw" Jul 2 00:22:24.249400 kubelet[2577]: E0702 00:22:24.248556 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:24.249607 containerd[1449]: time="2024-07-02T00:22:24.249528314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fp5cw,Uid:9761039f-dc00-496a-bcc0-b4763a123012,Namespace:kube-system,Attempt:0,}" Jul 2 00:22:24.257185 kubelet[2577]: E0702 00:22:24.256676 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:24.259250 containerd[1449]: time="2024-07-02T00:22:24.258826404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xnbjp,Uid:bad383c5-33ee-4ea7-a464-f6479e4f0591,Namespace:kube-system,Attempt:0,}" Jul 2 00:22:24.266251 containerd[1449]: time="2024-07-02T00:22:24.266178097Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7889d5969c-8bv75,Uid:d2f026c7-909c-46f0-8429-002f30b8f45f,Namespace:calico-system,Attempt:0,}" Jul 2 00:22:24.908317 systemd[1]: Created slice kubepods-besteffort-podbb1123fc_5807_4f3a_afae_4341c695ae1f.slice - libcontainer container kubepods-besteffort-podbb1123fc_5807_4f3a_afae_4341c695ae1f.slice. Jul 2 00:22:24.911231 containerd[1449]: time="2024-07-02T00:22:24.911164496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c6f8d,Uid:bb1123fc-5807-4f3a-afae-4341c695ae1f,Namespace:calico-system,Attempt:0,}" Jul 2 00:22:25.723883 containerd[1449]: time="2024-07-02T00:22:25.723756612Z" level=error msg="Failed to destroy network for sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:25.724477 containerd[1449]: time="2024-07-02T00:22:25.724425326Z" level=error msg="encountered an error cleaning up failed sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:25.724541 containerd[1449]: time="2024-07-02T00:22:25.724499256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fp5cw,Uid:9761039f-dc00-496a-bcc0-b4763a123012,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:25.724905 kubelet[2577]: 
E0702 00:22:25.724856 2577 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:25.725502 kubelet[2577]: E0702 00:22:25.724944 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fp5cw" Jul 2 00:22:25.725502 kubelet[2577]: E0702 00:22:25.724974 2577 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fp5cw" Jul 2 00:22:25.725502 kubelet[2577]: E0702 00:22:25.725057 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-fp5cw_kube-system(9761039f-dc00-496a-bcc0-b4763a123012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-fp5cw_kube-system(9761039f-dc00-496a-bcc0-b4763a123012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fp5cw" podUID="9761039f-dc00-496a-bcc0-b4763a123012" Jul 2 00:22:25.726987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56-shm.mount: Deactivated successfully. Jul 2 00:22:26.011500 containerd[1449]: time="2024-07-02T00:22:26.011331242Z" level=error msg="Failed to destroy network for sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.011993 containerd[1449]: time="2024-07-02T00:22:26.011808311Z" level=error msg="encountered an error cleaning up failed sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.011993 containerd[1449]: time="2024-07-02T00:22:26.011867604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xnbjp,Uid:bad383c5-33ee-4ea7-a464-f6479e4f0591,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.012187 kubelet[2577]: E0702 00:22:26.012160 2577 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.012239 kubelet[2577]: E0702 00:22:26.012219 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xnbjp" Jul 2 00:22:26.012283 kubelet[2577]: E0702 00:22:26.012241 2577 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xnbjp" Jul 2 00:22:26.012322 kubelet[2577]: E0702 00:22:26.012297 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-xnbjp_kube-system(bad383c5-33ee-4ea7-a464-f6479e4f0591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-xnbjp_kube-system(bad383c5-33ee-4ea7-a464-f6479e4f0591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xnbjp" 
podUID="bad383c5-33ee-4ea7-a464-f6479e4f0591" Jul 2 00:22:26.032898 kubelet[2577]: I0702 00:22:26.032870 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:22:26.033681 containerd[1449]: time="2024-07-02T00:22:26.033642696Z" level=info msg="StopPodSandbox for \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\"" Jul 2 00:22:26.033957 containerd[1449]: time="2024-07-02T00:22:26.033935003Z" level=info msg="Ensure that sandbox f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56 in task-service has been cleanup successfully" Jul 2 00:22:26.034385 kubelet[2577]: I0702 00:22:26.034354 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Jul 2 00:22:26.035037 containerd[1449]: time="2024-07-02T00:22:26.034937242Z" level=info msg="StopPodSandbox for \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\"" Jul 2 00:22:26.035302 containerd[1449]: time="2024-07-02T00:22:26.035270477Z" level=info msg="Ensure that sandbox 478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8 in task-service has been cleanup successfully" Jul 2 00:22:26.078052 containerd[1449]: time="2024-07-02T00:22:26.077941295Z" level=error msg="StopPodSandbox for \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\" failed" error="failed to destroy network for sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.078582 kubelet[2577]: E0702 00:22:26.078557 2577 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Jul 2 00:22:26.078694 kubelet[2577]: E0702 00:22:26.078657 2577 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"} Jul 2 00:22:26.078743 kubelet[2577]: E0702 00:22:26.078709 2577 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bad383c5-33ee-4ea7-a464-f6479e4f0591\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:22:26.078827 kubelet[2577]: E0702 00:22:26.078748 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bad383c5-33ee-4ea7-a464-f6479e4f0591\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xnbjp" podUID="bad383c5-33ee-4ea7-a464-f6479e4f0591" Jul 2 00:22:26.079077 containerd[1449]: time="2024-07-02T00:22:26.078992949Z" level=error msg="StopPodSandbox for \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\" failed" error="failed to destroy network for sandbox 
\"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.079261 kubelet[2577]: E0702 00:22:26.079184 2577 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:22:26.079261 kubelet[2577]: E0702 00:22:26.079205 2577 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56"} Jul 2 00:22:26.079261 kubelet[2577]: E0702 00:22:26.079240 2577 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9761039f-dc00-496a-bcc0-b4763a123012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:22:26.079403 kubelet[2577]: E0702 00:22:26.079290 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9761039f-dc00-496a-bcc0-b4763a123012\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fp5cw" podUID="9761039f-dc00-496a-bcc0-b4763a123012" Jul 2 00:22:26.093195 containerd[1449]: time="2024-07-02T00:22:26.093096854Z" level=error msg="Failed to destroy network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.093619 containerd[1449]: time="2024-07-02T00:22:26.093583211Z" level=error msg="encountered an error cleaning up failed sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.093681 containerd[1449]: time="2024-07-02T00:22:26.093645279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7889d5969c-8bv75,Uid:d2f026c7-909c-46f0-8429-002f30b8f45f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.093921 kubelet[2577]: E0702 00:22:26.093898 2577 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.093997 kubelet[2577]: E0702 00:22:26.093952 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7889d5969c-8bv75" Jul 2 00:22:26.093997 kubelet[2577]: E0702 00:22:26.093979 2577 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7889d5969c-8bv75" Jul 2 00:22:26.094089 kubelet[2577]: E0702 00:22:26.094042 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7889d5969c-8bv75_calico-system(d2f026c7-909c-46f0-8429-002f30b8f45f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7889d5969c-8bv75_calico-system(d2f026c7-909c-46f0-8429-002f30b8f45f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7889d5969c-8bv75" podUID="d2f026c7-909c-46f0-8429-002f30b8f45f" Jul 2 00:22:26.417190 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8-shm.mount: Deactivated successfully. Jul 2 00:22:26.435378 containerd[1449]: time="2024-07-02T00:22:26.435310726Z" level=error msg="Failed to destroy network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.435829 containerd[1449]: time="2024-07-02T00:22:26.435790580Z" level=error msg="encountered an error cleaning up failed sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.435895 containerd[1449]: time="2024-07-02T00:22:26.435853841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c6f8d,Uid:bb1123fc-5807-4f3a-afae-4341c695ae1f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:26.436231 kubelet[2577]: E0702 00:22:26.436201 2577 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 
00:22:26.436328 kubelet[2577]: E0702 00:22:26.436274 2577 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c6f8d" Jul 2 00:22:26.436328 kubelet[2577]: E0702 00:22:26.436302 2577 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c6f8d" Jul 2 00:22:26.436405 kubelet[2577]: E0702 00:22:26.436368 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c6f8d_calico-system(bb1123fc-5807-4f3a-afae-4341c695ae1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c6f8d_calico-system(bb1123fc-5807-4f3a-afae-4341c695ae1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f" Jul 2 00:22:26.438497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9-shm.mount: Deactivated successfully. 
Jul 2 00:22:27.037516 kubelet[2577]: I0702 00:22:27.037474 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9"
Jul 2 00:22:27.038865 containerd[1449]: time="2024-07-02T00:22:27.038234158Z" level=info msg="StopPodSandbox for \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\""
Jul 2 00:22:27.038865 containerd[1449]: time="2024-07-02T00:22:27.038504894Z" level=info msg="Ensure that sandbox 4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9 in task-service has been cleanup successfully"
Jul 2 00:22:27.042448 kubelet[2577]: I0702 00:22:27.042407 2577 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c"
Jul 2 00:22:27.043218 containerd[1449]: time="2024-07-02T00:22:27.043168813Z" level=info msg="StopPodSandbox for \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\""
Jul 2 00:22:27.043411 containerd[1449]: time="2024-07-02T00:22:27.043391127Z" level=info msg="Ensure that sandbox 1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c in task-service has been cleanup successfully"
Jul 2 00:22:27.072993 containerd[1449]: time="2024-07-02T00:22:27.072911467Z" level=error msg="StopPodSandbox for \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\" failed" error="failed to destroy network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:22:27.073283 kubelet[2577]: E0702 00:22:27.073257 2577 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9"
Jul 2 00:22:27.073360 kubelet[2577]: E0702 00:22:27.073315 2577 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9"}
Jul 2 00:22:27.073389 kubelet[2577]: E0702 00:22:27.073362 2577 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb1123fc-5807-4f3a-afae-4341c695ae1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 00:22:27.073450 kubelet[2577]: E0702 00:22:27.073403 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb1123fc-5807-4f3a-afae-4341c695ae1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c6f8d" podUID="bb1123fc-5807-4f3a-afae-4341c695ae1f"
Jul 2 00:22:27.076877 containerd[1449]: time="2024-07-02T00:22:27.076821189Z" level=error msg="StopPodSandbox for \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\" failed" error="failed to destroy network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 00:22:27.077048 kubelet[2577]: E0702 00:22:27.077026 2577 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c"
Jul 2 00:22:27.077176 kubelet[2577]: E0702 00:22:27.077072 2577 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c"}
Jul 2 00:22:27.077225 kubelet[2577]: E0702 00:22:27.077210 2577 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2f026c7-909c-46f0-8429-002f30b8f45f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 00:22:27.077306 kubelet[2577]: E0702 00:22:27.077256 2577 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2f026c7-909c-46f0-8429-002f30b8f45f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7889d5969c-8bv75" podUID="d2f026c7-909c-46f0-8429-002f30b8f45f"
Jul 2 00:22:27.301295 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:56228.service - OpenSSH per-connection server daemon (10.0.0.1:56228).
Jul 2 00:22:27.556973 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 56228 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:22:27.558746 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:27.563963 systemd-logind[1431]: New session 9 of user core.
Jul 2 00:22:27.571398 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:22:27.696252 sshd[3573]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:27.701375 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:56228.service: Deactivated successfully.
Jul 2 00:22:27.704142 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:22:27.705022 systemd-logind[1431]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:22:27.706487 systemd-logind[1431]: Removed session 9.
Jul 2 00:22:32.708318 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:56110.service - OpenSSH per-connection server daemon (10.0.0.1:56110).
Jul 2 00:22:32.763867 sshd[3592]: Accepted publickey for core from 10.0.0.1 port 56110 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:22:32.765795 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:32.772735 systemd-logind[1431]: New session 10 of user core.
Jul 2 00:22:32.779434 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 2 00:22:32.937269 sshd[3592]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:32.941435 systemd-logind[1431]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:22:32.942198 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:56110.service: Deactivated successfully.
Jul 2 00:22:32.945987 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:22:32.948245 systemd-logind[1431]: Removed session 10.
Jul 2 00:22:32.961523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224926156.mount: Deactivated successfully.
Jul 2 00:22:34.928465 containerd[1449]: time="2024-07-02T00:22:34.927139448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:34.936030 containerd[1449]: time="2024-07-02T00:22:34.935955140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Jul 2 00:22:34.952819 containerd[1449]: time="2024-07-02T00:22:34.952574773Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:34.972824 containerd[1449]: time="2024-07-02T00:22:34.972750933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:22:34.973828 containerd[1449]: time="2024-07-02T00:22:34.973777662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 10.943739509s"
Jul 2 00:22:34.973828 containerd[1449]: time="2024-07-02T00:22:34.973820473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Jul 2 00:22:35.012277 containerd[1449]: time="2024-07-02T00:22:35.012140969Z" level=info msg="CreateContainer within sandbox \"19b50141876e3a64c99747d8df8ba977ac45199eaf79f80eb6524e5251c69295\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul 2 00:22:35.051002 containerd[1449]: time="2024-07-02T00:22:35.050935898Z" level=info msg="CreateContainer within sandbox \"19b50141876e3a64c99747d8df8ba977ac45199eaf79f80eb6524e5251c69295\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bbd1d8558c767bf5af37345ac8b1cfeeeb7800ab90cdb6fc823cc08060010583\""
Jul 2 00:22:35.051791 containerd[1449]: time="2024-07-02T00:22:35.051733380Z" level=info msg="StartContainer for \"bbd1d8558c767bf5af37345ac8b1cfeeeb7800ab90cdb6fc823cc08060010583\""
Jul 2 00:22:35.197416 systemd[1]: Started cri-containerd-bbd1d8558c767bf5af37345ac8b1cfeeeb7800ab90cdb6fc823cc08060010583.scope - libcontainer container bbd1d8558c767bf5af37345ac8b1cfeeeb7800ab90cdb6fc823cc08060010583.
Jul 2 00:22:35.388223 containerd[1449]: time="2024-07-02T00:22:35.386731764Z" level=info msg="StartContainer for \"bbd1d8558c767bf5af37345ac8b1cfeeeb7800ab90cdb6fc823cc08060010583\" returns successfully"
Jul 2 00:22:35.591553 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 2 00:22:35.591755 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 2 00:22:36.087777 kubelet[2577]: E0702 00:22:36.087466 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:22:36.901641 containerd[1449]: time="2024-07-02T00:22:36.901589561Z" level=info msg="StopPodSandbox for \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\""
Jul 2 00:22:37.094241 kubelet[2577]: E0702 00:22:37.093747 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:22:37.256811 kubelet[2577]: I0702 00:22:37.256489 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-kbr7m" podStartSLOduration=3.7187518710000003 podCreationTimestamp="2024-07-02 00:22:00 +0000 UTC" firstStartedPulling="2024-07-02 00:22:01.44062212 +0000 UTC m=+21.626230119" lastFinishedPulling="2024-07-02 00:22:34.978305312 +0000 UTC m=+55.163913321" observedRunningTime="2024-07-02 00:22:36.189032415 +0000 UTC m=+56.374640414" watchObservedRunningTime="2024-07-02 00:22:37.256435073 +0000 UTC m=+57.442043092"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.256 [INFO][3701] k8s.go 608: Cleaning up netns ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.258 [INFO][3701] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" iface="eth0" netns="/var/run/netns/cni-419e3c8d-d0d2-73c0-8707-26cd6ff3a307"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.259 [INFO][3701] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" iface="eth0" netns="/var/run/netns/cni-419e3c8d-d0d2-73c0-8707-26cd6ff3a307"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.260 [INFO][3701] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" iface="eth0" netns="/var/run/netns/cni-419e3c8d-d0d2-73c0-8707-26cd6ff3a307"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.260 [INFO][3701] k8s.go 615: Releasing IP address(es) ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.260 [INFO][3701] utils.go 188: Calico CNI releasing IP address ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.596 [INFO][3731] ipam_plugin.go 411: Releasing address using handleID ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" HandleID="k8s-pod-network.478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.596 [INFO][3731] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.596 [INFO][3731] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.661 [WARNING][3731] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" HandleID="k8s-pod-network.478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.662 [INFO][3731] ipam_plugin.go 439: Releasing address using workloadID ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" HandleID="k8s-pod-network.478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.677 [INFO][3731] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:22:37.706164 containerd[1449]: 2024-07-02 00:22:37.685 [INFO][3701] k8s.go 621: Teardown processing complete. ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:22:37.706164 containerd[1449]: time="2024-07-02T00:22:37.701730304Z" level=info msg="TearDown network for sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\" successfully"
Jul 2 00:22:37.706164 containerd[1449]: time="2024-07-02T00:22:37.701769258Z" level=info msg="StopPodSandbox for \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\" returns successfully"
Jul 2 00:22:37.706164 containerd[1449]: time="2024-07-02T00:22:37.704637183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xnbjp,Uid:bad383c5-33ee-4ea7-a464-f6479e4f0591,Namespace:kube-system,Attempt:1,}"
Jul 2 00:22:37.706934 kubelet[2577]: E0702 00:22:37.703977 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:22:37.708473 systemd[1]: run-netns-cni\x2d419e3c8d\x2dd0d2\x2d73c0\x2d8707\x2d26cd6ff3a307.mount: Deactivated successfully.
Jul 2 00:22:37.947281 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:56114.service - OpenSSH per-connection server daemon (10.0.0.1:56114).
Jul 2 00:22:38.051598 sshd[3751]: Accepted publickey for core from 10.0.0.1 port 56114 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:22:38.054743 sshd[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:22:38.072711 systemd-logind[1431]: New session 11 of user core.
Jul 2 00:22:38.079689 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 2 00:22:38.330938 sshd[3751]: pam_unix(sshd:session): session closed for user core
Jul 2 00:22:38.337515 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:56114.service: Deactivated successfully.
Jul 2 00:22:38.340646 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 00:22:38.342224 systemd-logind[1431]: Session 11 logged out. Waiting for processes to exit.
Jul 2 00:22:38.343819 systemd-logind[1431]: Removed session 11.
Jul 2 00:22:39.423801 systemd-networkd[1385]: vxlan.calico: Link UP
Jul 2 00:22:39.423815 systemd-networkd[1385]: vxlan.calico: Gained carrier
Jul 2 00:22:40.901960 containerd[1449]: time="2024-07-02T00:22:40.901834280Z" level=info msg="StopPodSandbox for \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\""
Jul 2 00:22:40.901960 containerd[1449]: time="2024-07-02T00:22:40.901916397Z" level=info msg="StopPodSandbox for \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\""
Jul 2 00:22:41.389614 systemd-networkd[1385]: vxlan.calico: Gained IPv6LL
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:41.674 [INFO][4006] k8s.go 608: Cleaning up netns ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9"
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:41.674 [INFO][4006] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" iface="eth0" netns="/var/run/netns/cni-ca55f101-fca2-75e8-d7a4-119de9de4e7c"
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:41.674 [INFO][4006] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" iface="eth0" netns="/var/run/netns/cni-ca55f101-fca2-75e8-d7a4-119de9de4e7c"
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:41.675 [INFO][4006] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" iface="eth0" netns="/var/run/netns/cni-ca55f101-fca2-75e8-d7a4-119de9de4e7c"
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:41.675 [INFO][4006] k8s.go 615: Releasing IP address(es) ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9"
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:41.675 [INFO][4006] utils.go 188: Calico CNI releasing IP address ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9"
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:41.708 [INFO][4029] ipam_plugin.go 411: Releasing address using handleID ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" HandleID="k8s-pod-network.4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0"
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:41.709 [INFO][4029] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:41.709 [INFO][4029] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:42.080 [WARNING][4029] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" HandleID="k8s-pod-network.4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0"
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:42.080 [INFO][4029] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" HandleID="k8s-pod-network.4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0"
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:42.083 [INFO][4029] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:22:42.088968 containerd[1449]: 2024-07-02 00:22:42.086 [INFO][4006] k8s.go 621: Teardown processing complete. ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9"
Jul 2 00:22:42.093403 containerd[1449]: time="2024-07-02T00:22:42.089278822Z" level=info msg="TearDown network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\" successfully"
Jul 2 00:22:42.093403 containerd[1449]: time="2024-07-02T00:22:42.089341493Z" level=info msg="StopPodSandbox for \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\" returns successfully"
Jul 2 00:22:42.093403 containerd[1449]: time="2024-07-02T00:22:42.092425032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c6f8d,Uid:bb1123fc-5807-4f3a-afae-4341c695ae1f,Namespace:calico-system,Attempt:1,}"
Jul 2 00:22:42.093965 systemd[1]: run-netns-cni\x2dca55f101\x2dfca2\x2d75e8\x2dd7a4\x2d119de9de4e7c.mount: Deactivated successfully.
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.080 [INFO][4001] k8s.go 608: Cleaning up netns ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56"
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.080 [INFO][4001] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" iface="eth0" netns="/var/run/netns/cni-6f1d1996-1e95-b726-5357-dee9d8d3f9fc"
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.082 [INFO][4001] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" iface="eth0" netns="/var/run/netns/cni-6f1d1996-1e95-b726-5357-dee9d8d3f9fc"
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.082 [INFO][4001] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" iface="eth0" netns="/var/run/netns/cni-6f1d1996-1e95-b726-5357-dee9d8d3f9fc"
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.082 [INFO][4001] k8s.go 615: Releasing IP address(es) ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56"
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.083 [INFO][4001] utils.go 188: Calico CNI releasing IP address ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56"
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.107 [INFO][4036] ipam_plugin.go 411: Releasing address using handleID ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" HandleID="k8s-pod-network.f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0"
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.108 [INFO][4036] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.108 [INFO][4036] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.131 [WARNING][4036] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" HandleID="k8s-pod-network.f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0"
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.132 [INFO][4036] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" HandleID="k8s-pod-network.f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0"
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.135 [INFO][4036] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:22:42.140204 containerd[1449]: 2024-07-02 00:22:42.137 [INFO][4001] k8s.go 621: Teardown processing complete. ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56"
Jul 2 00:22:42.140744 containerd[1449]: time="2024-07-02T00:22:42.140459141Z" level=info msg="TearDown network for sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\" successfully"
Jul 2 00:22:42.140744 containerd[1449]: time="2024-07-02T00:22:42.140500280Z" level=info msg="StopPodSandbox for \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\" returns successfully"
Jul 2 00:22:42.140892 kubelet[2577]: E0702 00:22:42.140859 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:22:42.144165 containerd[1449]: time="2024-07-02T00:22:42.143498406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fp5cw,Uid:9761039f-dc00-496a-bcc0-b4763a123012,Namespace:kube-system,Attempt:1,}"
Jul 2 00:22:42.144480 systemd[1]: run-netns-cni\x2d6f1d1996\x2d1e95\x2db726\x2d5357\x2ddee9d8d3f9fc.mount: Deactivated successfully.
Jul 2 00:22:42.589791 systemd-networkd[1385]: calia24864ddaa5: Link UP
Jul 2 00:22:42.593439 systemd-networkd[1385]: calia24864ddaa5: Gained carrier
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:41.665 [INFO][4015] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--xnbjp-eth0 coredns-5dd5756b68- kube-system bad383c5-33ee-4ea7-a464-f6479e4f0591 850 0 2024-07-02 00:21:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-xnbjp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia24864ddaa5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Namespace="kube-system" Pod="coredns-5dd5756b68-xnbjp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xnbjp-"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:41.665 [INFO][4015] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Namespace="kube-system" Pod="coredns-5dd5756b68-xnbjp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.114 [INFO][4042] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" HandleID="k8s-pod-network.11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.144 [INFO][4042] ipam_plugin.go 264: Auto assigning IP ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" HandleID="k8s-pod-network.11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000129900), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-xnbjp", "timestamp":"2024-07-02 00:22:42.114478531 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.144 [INFO][4042] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.144 [INFO][4042] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.144 [INFO][4042] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.155 [INFO][4042] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" host="localhost"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.274 [INFO][4042] ipam.go 372: Looking up existing affinities for host host="localhost"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.326 [INFO][4042] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.330 [INFO][4042] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.337 [INFO][4042] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.337 [INFO][4042] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" host="localhost"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.351 [INFO][4042] ipam.go 1685: Creating new handle: k8s-pod-network.11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.577 [INFO][4042] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" host="localhost"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.582 [INFO][4042] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" host="localhost"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.582 [INFO][4042] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" host="localhost"
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.582 [INFO][4042] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:22:42.638976 containerd[1449]: 2024-07-02 00:22:42.582 [INFO][4042] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" HandleID="k8s-pod-network.11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:42.640148 containerd[1449]: 2024-07-02 00:22:42.586 [INFO][4015] k8s.go 386: Populated endpoint ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Namespace="kube-system" Pod="coredns-5dd5756b68-xnbjp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xnbjp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"bad383c5-33ee-4ea7-a464-f6479e4f0591", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-xnbjp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24864ddaa5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:22:42.640148 containerd[1449]: 2024-07-02 00:22:42.586 [INFO][4015] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Namespace="kube-system" Pod="coredns-5dd5756b68-xnbjp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:42.640148 containerd[1449]: 2024-07-02 00:22:42.587 [INFO][4015] dataplane_linux.go 68: Setting the host side veth name to calia24864ddaa5 ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Namespace="kube-system" Pod="coredns-5dd5756b68-xnbjp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:42.640148 containerd[1449]: 2024-07-02 00:22:42.592 [INFO][4015] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Namespace="kube-system" Pod="coredns-5dd5756b68-xnbjp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:42.640148 containerd[1449]: 2024-07-02 00:22:42.593 [INFO][4015] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Namespace="kube-system" Pod="coredns-5dd5756b68-xnbjp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xnbjp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"bad383c5-33ee-4ea7-a464-f6479e4f0591", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca", Pod:"coredns-5dd5756b68-xnbjp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24864ddaa5", MAC:"26:d1:16:d6:a6:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:22:42.640148 containerd[1449]: 2024-07-02 00:22:42.635 [INFO][4015] k8s.go 500: Wrote updated endpoint to datastore ContainerID="11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca" Namespace="kube-system" Pod="coredns-5dd5756b68-xnbjp" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:22:42.901941 containerd[1449]: time="2024-07-02T00:22:42.901766927Z" level=info msg="StopPodSandbox for \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\""
Jul 2 00:22:43.065724 containerd[1449]: time="2024-07-02T00:22:43.065592112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:22:43.065724 containerd[1449]: time="2024-07-02T00:22:43.065691973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:43.065724 containerd[1449]: time="2024-07-02T00:22:43.065713585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:22:43.086445 containerd[1449]: time="2024-07-02T00:22:43.065727461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:22:43.096281 systemd[1]: Started cri-containerd-11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca.scope - libcontainer container 11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca.
Jul 2 00:22:43.110364 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:22:43.327952 containerd[1449]: time="2024-07-02T00:22:43.327878190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xnbjp,Uid:bad383c5-33ee-4ea7-a464-f6479e4f0591,Namespace:kube-system,Attempt:1,} returns sandbox id \"11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca\"" Jul 2 00:22:43.328802 kubelet[2577]: E0702 00:22:43.328781 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:43.330932 containerd[1449]: time="2024-07-02T00:22:43.330751780Z" level=info msg="CreateContainer within sandbox \"11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:22:43.344314 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:53134.service - OpenSSH per-connection server daemon (10.0.0.1:53134). Jul 2 00:22:43.431967 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 53134 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:22:43.433903 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:43.454386 systemd-logind[1431]: New session 12 of user core. Jul 2 00:22:43.460394 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.487 [INFO][4082] k8s.go 608: Cleaning up netns ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.488 [INFO][4082] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" iface="eth0" netns="/var/run/netns/cni-7ae857bb-d133-34cf-0ecd-6a65ec2447fc" Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.488 [INFO][4082] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" iface="eth0" netns="/var/run/netns/cni-7ae857bb-d133-34cf-0ecd-6a65ec2447fc" Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.488 [INFO][4082] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" iface="eth0" netns="/var/run/netns/cni-7ae857bb-d133-34cf-0ecd-6a65ec2447fc" Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.488 [INFO][4082] k8s.go 615: Releasing IP address(es) ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.488 [INFO][4082] utils.go 188: Calico CNI releasing IP address ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.517 [INFO][4149] ipam_plugin.go 411: Releasing address using handleID ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" HandleID="k8s-pod-network.1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.517 [INFO][4149] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.517 [INFO][4149] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.729 [WARNING][4149] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" HandleID="k8s-pod-network.1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.729 [INFO][4149] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" HandleID="k8s-pod-network.1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.731 [INFO][4149] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:22:43.735726 containerd[1449]: 2024-07-02 00:22:43.733 [INFO][4082] k8s.go 621: Teardown processing complete. ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:22:43.736542 containerd[1449]: time="2024-07-02T00:22:43.736057876Z" level=info msg="TearDown network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\" successfully" Jul 2 00:22:43.736542 containerd[1449]: time="2024-07-02T00:22:43.736144101Z" level=info msg="StopPodSandbox for \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\" returns successfully" Jul 2 00:22:43.736930 containerd[1449]: time="2024-07-02T00:22:43.736875809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7889d5969c-8bv75,Uid:d2f026c7-909c-46f0-8429-002f30b8f45f,Namespace:calico-system,Attempt:1,}" Jul 2 00:22:43.793331 sshd[4131]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:43.798710 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:53134.service: Deactivated successfully. Jul 2 00:22:43.801479 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:22:43.802419 systemd-logind[1431]: Session 12 logged out. Waiting for processes to exit. 
Jul 2 00:22:43.803682 systemd-logind[1431]: Removed session 12. Jul 2 00:22:43.821353 systemd-networkd[1385]: calia24864ddaa5: Gained IPv6LL Jul 2 00:22:43.993063 systemd[1]: run-netns-cni\x2d7ae857bb\x2dd133\x2d34cf\x2d0ecd\x2d6a65ec2447fc.mount: Deactivated successfully. Jul 2 00:22:44.064158 systemd-networkd[1385]: cali1b5aeb09e27: Link UP Jul 2 00:22:44.064893 systemd-networkd[1385]: cali1b5aeb09e27: Gained carrier Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:43.729 [INFO][4133] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c6f8d-eth0 csi-node-driver- calico-system bb1123fc-5807-4f3a-afae-4341c695ae1f 868 0 2024-07-02 00:22:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-c6f8d eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali1b5aeb09e27 [] []}} ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Namespace="calico-system" Pod="csi-node-driver-c6f8d" WorkloadEndpoint="localhost-k8s-csi--node--driver--c6f8d-" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:43.729 [INFO][4133] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Namespace="calico-system" Pod="csi-node-driver-c6f8d" WorkloadEndpoint="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:43.795 [INFO][4167] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" HandleID="k8s-pod-network.a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" 
Workload="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:43.804 [INFO][4167] ipam_plugin.go 264: Auto assigning IP ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" HandleID="k8s-pod-network.a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c6f8d", "timestamp":"2024-07-02 00:22:43.795344102 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:43.804 [INFO][4167] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:43.804 [INFO][4167] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:43.804 [INFO][4167] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:43.806 [INFO][4167] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" host="localhost" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.006 [INFO][4167] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.011 [INFO][4167] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.014 [INFO][4167] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.016 [INFO][4167] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.016 [INFO][4167] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" host="localhost" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.018 [INFO][4167] ipam.go 1685: Creating new handle: k8s-pod-network.a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3 Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.022 [INFO][4167] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" host="localhost" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.059 [INFO][4167] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" host="localhost" Jul 2 
00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.059 [INFO][4167] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" host="localhost" Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.059 [INFO][4167] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:22:44.594643 containerd[1449]: 2024-07-02 00:22:44.059 [INFO][4167] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" HandleID="k8s-pod-network.a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:22:44.595847 containerd[1449]: 2024-07-02 00:22:44.062 [INFO][4133] k8s.go 386: Populated endpoint ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Namespace="calico-system" Pod="csi-node-driver-c6f8d" WorkloadEndpoint="localhost-k8s-csi--node--driver--c6f8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c6f8d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb1123fc-5807-4f3a-afae-4341c695ae1f", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c6f8d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1b5aeb09e27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:44.595847 containerd[1449]: 2024-07-02 00:22:44.062 [INFO][4133] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Namespace="calico-system" Pod="csi-node-driver-c6f8d" WorkloadEndpoint="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:22:44.595847 containerd[1449]: 2024-07-02 00:22:44.062 [INFO][4133] dataplane_linux.go 68: Setting the host side veth name to cali1b5aeb09e27 ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Namespace="calico-system" Pod="csi-node-driver-c6f8d" WorkloadEndpoint="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:22:44.595847 containerd[1449]: 2024-07-02 00:22:44.064 [INFO][4133] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Namespace="calico-system" Pod="csi-node-driver-c6f8d" WorkloadEndpoint="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:22:44.595847 containerd[1449]: 2024-07-02 00:22:44.065 [INFO][4133] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Namespace="calico-system" Pod="csi-node-driver-c6f8d" WorkloadEndpoint="localhost-k8s-csi--node--driver--c6f8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c6f8d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb1123fc-5807-4f3a-afae-4341c695ae1f", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3", Pod:"csi-node-driver-c6f8d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1b5aeb09e27", MAC:"12:6b:e6:87:79:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:44.595847 containerd[1449]: 2024-07-02 00:22:44.591 [INFO][4133] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3" Namespace="calico-system" Pod="csi-node-driver-c6f8d" WorkloadEndpoint="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:22:45.037807 containerd[1449]: time="2024-07-02T00:22:45.036984551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:45.037807 containerd[1449]: time="2024-07-02T00:22:45.037615627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:45.037807 containerd[1449]: time="2024-07-02T00:22:45.037639562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:45.037807 containerd[1449]: time="2024-07-02T00:22:45.037655153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:45.058255 systemd[1]: Started cri-containerd-a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3.scope - libcontainer container a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3. Jul 2 00:22:45.070433 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:22:45.082390 containerd[1449]: time="2024-07-02T00:22:45.082345613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c6f8d,Uid:bb1123fc-5807-4f3a-afae-4341c695ae1f,Namespace:calico-system,Attempt:1,} returns sandbox id \"a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3\"" Jul 2 00:22:45.084004 containerd[1449]: time="2024-07-02T00:22:45.083980377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:22:45.101289 systemd-networkd[1385]: cali1b5aeb09e27: Gained IPv6LL Jul 2 00:22:45.678479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593869419.mount: Deactivated successfully. 
Jul 2 00:22:45.890916 systemd-networkd[1385]: cali58c7f6b7a18: Link UP Jul 2 00:22:45.891650 systemd-networkd[1385]: cali58c7f6b7a18: Gained carrier Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.504 [INFO][4179] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--fp5cw-eth0 coredns-5dd5756b68- kube-system 9761039f-dc00-496a-bcc0-b4763a123012 869 0 2024-07-02 00:21:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-fp5cw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali58c7f6b7a18 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Namespace="kube-system" Pod="coredns-5dd5756b68-fp5cw" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--fp5cw-" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.504 [INFO][4179] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Namespace="kube-system" Pod="coredns-5dd5756b68-fp5cw" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.588 [INFO][4252] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" HandleID="k8s-pod-network.42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.857 [INFO][4252] ipam_plugin.go 264: Auto assigning IP ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" 
HandleID="k8s-pod-network.42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000287700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-fp5cw", "timestamp":"2024-07-02 00:22:45.588472627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.857 [INFO][4252] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.857 [INFO][4252] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.857 [INFO][4252] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.859 [INFO][4252] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" host="localhost" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.862 [INFO][4252] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.866 [INFO][4252] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.867 [INFO][4252] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.868 [INFO][4252] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.868 [INFO][4252] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" host="localhost" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.870 [INFO][4252] ipam.go 1685: Creating new handle: k8s-pod-network.42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.872 [INFO][4252] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" host="localhost" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.884 [INFO][4252] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" host="localhost" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.884 [INFO][4252] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" host="localhost" Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.884 [INFO][4252] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:22:45.904696 containerd[1449]: 2024-07-02 00:22:45.884 [INFO][4252] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" HandleID="k8s-pod-network.42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:22:45.905558 containerd[1449]: 2024-07-02 00:22:45.888 [INFO][4179] k8s.go 386: Populated endpoint ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Namespace="kube-system" Pod="coredns-5dd5756b68-fp5cw" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--fp5cw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9761039f-dc00-496a-bcc0-b4763a123012", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-fp5cw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58c7f6b7a18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:45.905558 containerd[1449]: 2024-07-02 00:22:45.888 [INFO][4179] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Namespace="kube-system" Pod="coredns-5dd5756b68-fp5cw" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:22:45.905558 containerd[1449]: 2024-07-02 00:22:45.888 [INFO][4179] dataplane_linux.go 68: Setting the host side veth name to cali58c7f6b7a18 ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Namespace="kube-system" Pod="coredns-5dd5756b68-fp5cw" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:22:45.905558 containerd[1449]: 2024-07-02 00:22:45.891 [INFO][4179] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Namespace="kube-system" Pod="coredns-5dd5756b68-fp5cw" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:22:45.905558 containerd[1449]: 2024-07-02 00:22:45.892 [INFO][4179] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Namespace="kube-system" Pod="coredns-5dd5756b68-fp5cw" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--fp5cw-eth0", 
GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9761039f-dc00-496a-bcc0-b4763a123012", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc", Pod:"coredns-5dd5756b68-fp5cw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58c7f6b7a18", MAC:"36:9e:8d:a9:2a:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:45.905558 containerd[1449]: 2024-07-02 00:22:45.900 [INFO][4179] k8s.go 500: Wrote updated endpoint to datastore ContainerID="42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc" Namespace="kube-system" Pod="coredns-5dd5756b68-fp5cw" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:22:46.070195 containerd[1449]: 
time="2024-07-02T00:22:46.070080140Z" level=info msg="CreateContainer within sandbox \"11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6f1929294417db167614a51ad86fef1747d18d856ab14065b076d673c7224b1\"" Jul 2 00:22:46.071163 containerd[1449]: time="2024-07-02T00:22:46.070746754Z" level=info msg="StartContainer for \"b6f1929294417db167614a51ad86fef1747d18d856ab14065b076d673c7224b1\"" Jul 2 00:22:46.118452 systemd[1]: Started cri-containerd-b6f1929294417db167614a51ad86fef1747d18d856ab14065b076d673c7224b1.scope - libcontainer container b6f1929294417db167614a51ad86fef1747d18d856ab14065b076d673c7224b1. Jul 2 00:22:46.176556 containerd[1449]: time="2024-07-02T00:22:46.176466979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:46.176768 containerd[1449]: time="2024-07-02T00:22:46.176565598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:46.176768 containerd[1449]: time="2024-07-02T00:22:46.176598241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:46.176768 containerd[1449]: time="2024-07-02T00:22:46.176620442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:46.199423 systemd[1]: Started cri-containerd-42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc.scope - libcontainer container 42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc. 
Jul 2 00:22:46.212965 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:22:46.223651 containerd[1449]: time="2024-07-02T00:22:46.223585038Z" level=info msg="StartContainer for \"b6f1929294417db167614a51ad86fef1747d18d856ab14065b076d673c7224b1\" returns successfully" Jul 2 00:22:46.239645 containerd[1449]: time="2024-07-02T00:22:46.239606507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fp5cw,Uid:9761039f-dc00-496a-bcc0-b4763a123012,Namespace:kube-system,Attempt:1,} returns sandbox id \"42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc\"" Jul 2 00:22:46.240331 kubelet[2577]: E0702 00:22:46.240309 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:46.242137 containerd[1449]: time="2024-07-02T00:22:46.242090475Z" level=info msg="CreateContainer within sandbox \"42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:22:46.602815 systemd-networkd[1385]: calic21b500afcd: Link UP Jul 2 00:22:46.603092 systemd-networkd[1385]: calic21b500afcd: Gained carrier Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.453 [INFO][4344] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0 calico-kube-controllers-7889d5969c- calico-system d2f026c7-909c-46f0-8429-002f30b8f45f 880 0 2024-07-02 00:22:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7889d5969c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7889d5969c-8bv75 eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic21b500afcd [] []}} ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Namespace="calico-system" Pod="calico-kube-controllers-7889d5969c-8bv75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.453 [INFO][4344] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Namespace="calico-system" Pod="calico-kube-controllers-7889d5969c-8bv75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.483 [INFO][4359] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" HandleID="k8s-pod-network.ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.493 [INFO][4359] ipam_plugin.go 264: Auto assigning IP ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" HandleID="k8s-pod-network.ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7889d5969c-8bv75", "timestamp":"2024-07-02 00:22:46.483354353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.493 
[INFO][4359] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.493 [INFO][4359] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.493 [INFO][4359] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.497 [INFO][4359] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" host="localhost" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.505 [INFO][4359] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.512 [INFO][4359] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.514 [INFO][4359] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.518 [INFO][4359] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.518 [INFO][4359] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" host="localhost" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.520 [INFO][4359] ipam.go 1685: Creating new handle: k8s-pod-network.ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.525 [INFO][4359] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" host="localhost" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.596 [INFO][4359] 
ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" host="localhost" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.596 [INFO][4359] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" host="localhost" Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.596 [INFO][4359] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:22:46.633547 containerd[1449]: 2024-07-02 00:22:46.596 [INFO][4359] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" HandleID="k8s-pod-network.ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:46.634410 containerd[1449]: 2024-07-02 00:22:46.598 [INFO][4344] k8s.go 386: Populated endpoint ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Namespace="calico-system" Pod="calico-kube-controllers-7889d5969c-8bv75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0", GenerateName:"calico-kube-controllers-7889d5969c-", Namespace:"calico-system", SelfLink:"", UID:"d2f026c7-909c-46f0-8429-002f30b8f45f", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7889d5969c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7889d5969c-8bv75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic21b500afcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:46.634410 containerd[1449]: 2024-07-02 00:22:46.599 [INFO][4344] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Namespace="calico-system" Pod="calico-kube-controllers-7889d5969c-8bv75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:46.634410 containerd[1449]: 2024-07-02 00:22:46.599 [INFO][4344] dataplane_linux.go 68: Setting the host side veth name to calic21b500afcd ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Namespace="calico-system" Pod="calico-kube-controllers-7889d5969c-8bv75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:46.634410 containerd[1449]: 2024-07-02 00:22:46.601 [INFO][4344] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Namespace="calico-system" Pod="calico-kube-controllers-7889d5969c-8bv75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:46.634410 containerd[1449]: 2024-07-02 00:22:46.602 
[INFO][4344] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Namespace="calico-system" Pod="calico-kube-controllers-7889d5969c-8bv75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0", GenerateName:"calico-kube-controllers-7889d5969c-", Namespace:"calico-system", SelfLink:"", UID:"d2f026c7-909c-46f0-8429-002f30b8f45f", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7889d5969c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d", Pod:"calico-kube-controllers-7889d5969c-8bv75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic21b500afcd", MAC:"96:c9:75:ec:43:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:46.634410 containerd[1449]: 2024-07-02 00:22:46.629 [INFO][4344] k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d" Namespace="calico-system" Pod="calico-kube-controllers-7889d5969c-8bv75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:22:46.826825 containerd[1449]: time="2024-07-02T00:22:46.826689974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:46.826825 containerd[1449]: time="2024-07-02T00:22:46.826792130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:46.826825 containerd[1449]: time="2024-07-02T00:22:46.826819342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:46.827372 containerd[1449]: time="2024-07-02T00:22:46.826839920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:46.835421 containerd[1449]: time="2024-07-02T00:22:46.835363541Z" level=info msg="CreateContainer within sandbox \"42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72a1911bcc093af56d264014c2d0a122581de7ba99fd9d1c0611fe2feaf3e9f3\"" Jul 2 00:22:46.836642 containerd[1449]: time="2024-07-02T00:22:46.836601356Z" level=info msg="StartContainer for \"72a1911bcc093af56d264014c2d0a122581de7ba99fd9d1c0611fe2feaf3e9f3\"" Jul 2 00:22:46.849454 systemd[1]: Started cri-containerd-ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d.scope - libcontainer container ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d. 
Jul 2 00:22:46.893098 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:22:46.911542 systemd[1]: Started cri-containerd-72a1911bcc093af56d264014c2d0a122581de7ba99fd9d1c0611fe2feaf3e9f3.scope - libcontainer container 72a1911bcc093af56d264014c2d0a122581de7ba99fd9d1c0611fe2feaf3e9f3. Jul 2 00:22:46.928024 containerd[1449]: time="2024-07-02T00:22:46.927972699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7889d5969c-8bv75,Uid:d2f026c7-909c-46f0-8429-002f30b8f45f,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d\"" Jul 2 00:22:47.115668 containerd[1449]: time="2024-07-02T00:22:47.115590215Z" level=info msg="StartContainer for \"72a1911bcc093af56d264014c2d0a122581de7ba99fd9d1c0611fe2feaf3e9f3\" returns successfully" Jul 2 00:22:47.136326 kubelet[2577]: E0702 00:22:47.136292 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:47.138771 kubelet[2577]: E0702 00:22:47.138739 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:47.406323 systemd-networkd[1385]: cali58c7f6b7a18: Gained IPv6LL Jul 2 00:22:47.761238 kubelet[2577]: I0702 00:22:47.760426 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xnbjp" podStartSLOduration=52.760375988 podCreationTimestamp="2024-07-02 00:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:22:47.347046215 +0000 UTC m=+67.532654214" watchObservedRunningTime="2024-07-02 00:22:47.760375988 +0000 UTC m=+67.945983987" Jul 2 
00:22:48.143745 kubelet[2577]: E0702 00:22:48.142807 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:48.143745 kubelet[2577]: E0702 00:22:48.142823 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:48.301425 systemd-networkd[1385]: calic21b500afcd: Gained IPv6LL Jul 2 00:22:48.438182 kubelet[2577]: I0702 00:22:48.438120 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fp5cw" podStartSLOduration=53.438047531 podCreationTimestamp="2024-07-02 00:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:22:47.762077991 +0000 UTC m=+67.947685980" watchObservedRunningTime="2024-07-02 00:22:48.438047531 +0000 UTC m=+68.623655530" Jul 2 00:22:48.807833 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:50796.service - OpenSSH per-connection server daemon (10.0.0.1:50796). 
Jul 2 00:22:49.144769 kubelet[2577]: E0702 00:22:49.144641 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:49.145214 kubelet[2577]: E0702 00:22:49.144839 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:49.225592 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 50796 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:22:49.227688 sshd[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:49.232070 systemd-logind[1431]: New session 13 of user core. Jul 2 00:22:49.240241 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:22:49.901995 kubelet[2577]: E0702 00:22:49.901880 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:50.076246 sshd[4470]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:50.080610 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:50796.service: Deactivated successfully. Jul 2 00:22:50.083027 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:22:50.083748 systemd-logind[1431]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:22:50.084853 systemd-logind[1431]: Removed session 13. 
Jul 2 00:22:51.902715 kubelet[2577]: E0702 00:22:51.902637 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:53.108630 containerd[1449]: time="2024-07-02T00:22:53.108558124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:53.213234 containerd[1449]: time="2024-07-02T00:22:53.213120200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:22:53.450141 containerd[1449]: time="2024-07-02T00:22:53.449862297Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:53.613174 containerd[1449]: time="2024-07-02T00:22:53.613080978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:53.614020 containerd[1449]: time="2024-07-02T00:22:53.613985348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 8.529970797s" Jul 2 00:22:53.614020 containerd[1449]: time="2024-07-02T00:22:53.614022910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:22:53.614738 containerd[1449]: time="2024-07-02T00:22:53.614699503Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:22:53.616265 containerd[1449]: time="2024-07-02T00:22:53.616204492Z" level=info msg="CreateContainer within sandbox \"a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:22:55.089529 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:50798.service - OpenSSH per-connection server daemon (10.0.0.1:50798). Jul 2 00:22:55.130830 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 50798 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:22:55.132488 sshd[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:55.136652 systemd-logind[1431]: New session 14 of user core. Jul 2 00:22:55.145258 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:22:55.534539 sshd[4513]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:55.544630 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:50798.service: Deactivated successfully. Jul 2 00:22:55.547644 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:22:55.549077 systemd-logind[1431]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:22:55.558803 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:50808.service - OpenSSH per-connection server daemon (10.0.0.1:50808). Jul 2 00:22:55.560492 systemd-logind[1431]: Removed session 14. 
Jul 2 00:22:55.565971 containerd[1449]: time="2024-07-02T00:22:55.565909430Z" level=info msg="CreateContainer within sandbox \"a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"573aded490e7c91d2b403d255d85e388c615bf8cd44f98b8ba231fadb6e3d69e\"" Jul 2 00:22:55.566898 containerd[1449]: time="2024-07-02T00:22:55.566870399Z" level=info msg="StartContainer for \"573aded490e7c91d2b403d255d85e388c615bf8cd44f98b8ba231fadb6e3d69e\"" Jul 2 00:22:55.604635 systemd[1]: run-containerd-runc-k8s.io-573aded490e7c91d2b403d255d85e388c615bf8cd44f98b8ba231fadb6e3d69e-runc.0CuHoD.mount: Deactivated successfully. Jul 2 00:22:55.618285 systemd[1]: Started cri-containerd-573aded490e7c91d2b403d255d85e388c615bf8cd44f98b8ba231fadb6e3d69e.scope - libcontainer container 573aded490e7c91d2b403d255d85e388c615bf8cd44f98b8ba231fadb6e3d69e. Jul 2 00:22:55.777560 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 50808 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:22:55.779977 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:55.785584 systemd-logind[1431]: New session 15 of user core. Jul 2 00:22:55.792435 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:22:55.810662 containerd[1449]: time="2024-07-02T00:22:55.810590297Z" level=info msg="StartContainer for \"573aded490e7c91d2b403d255d85e388c615bf8cd44f98b8ba231fadb6e3d69e\" returns successfully" Jul 2 00:22:56.381890 sshd[4530]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:56.396544 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:50820.service - OpenSSH per-connection server daemon (10.0.0.1:50820). Jul 2 00:22:56.419594 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:50808.service: Deactivated successfully. Jul 2 00:22:56.422170 systemd[1]: session-15.scope: Deactivated successfully. 
Jul 2 00:22:56.423410 systemd-logind[1431]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:22:56.425522 systemd-logind[1431]: Removed session 15. Jul 2 00:22:56.471629 sshd[4575]: Accepted publickey for core from 10.0.0.1 port 50820 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:22:56.473692 sshd[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:56.480351 systemd-logind[1431]: New session 16 of user core. Jul 2 00:22:56.491348 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:22:56.731497 sshd[4575]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:56.737292 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:50820.service: Deactivated successfully. Jul 2 00:22:56.740229 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:22:56.741028 systemd-logind[1431]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:22:56.742086 systemd-logind[1431]: Removed session 16. 
Jul 2 00:22:58.188613 containerd[1449]: time="2024-07-02T00:22:58.188535070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:58.189958 containerd[1449]: time="2024-07-02T00:22:58.189905922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 00:22:58.192520 containerd[1449]: time="2024-07-02T00:22:58.192349708Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:58.195039 containerd[1449]: time="2024-07-02T00:22:58.194947839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:58.195595 containerd[1449]: time="2024-07-02T00:22:58.195541996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.580811653s" Jul 2 00:22:58.195595 containerd[1449]: time="2024-07-02T00:22:58.195578686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 00:22:58.196367 containerd[1449]: time="2024-07-02T00:22:58.196149748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:22:58.207729 containerd[1449]: time="2024-07-02T00:22:58.207680789Z" level=info msg="CreateContainer within sandbox 
\"ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:22:58.226738 containerd[1449]: time="2024-07-02T00:22:58.226655220Z" level=info msg="CreateContainer within sandbox \"ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a0b8b9cbed2293ae7c8bffaf7cf7b2324c5196d7672e1e129b823d8e202d20c6\"" Jul 2 00:22:58.227364 containerd[1449]: time="2024-07-02T00:22:58.227306246Z" level=info msg="StartContainer for \"a0b8b9cbed2293ae7c8bffaf7cf7b2324c5196d7672e1e129b823d8e202d20c6\"" Jul 2 00:22:58.265462 systemd[1]: Started cri-containerd-a0b8b9cbed2293ae7c8bffaf7cf7b2324c5196d7672e1e129b823d8e202d20c6.scope - libcontainer container a0b8b9cbed2293ae7c8bffaf7cf7b2324c5196d7672e1e129b823d8e202d20c6. Jul 2 00:22:58.320856 containerd[1449]: time="2024-07-02T00:22:58.320804993Z" level=info msg="StartContainer for \"a0b8b9cbed2293ae7c8bffaf7cf7b2324c5196d7672e1e129b823d8e202d20c6\" returns successfully" Jul 2 00:22:58.403880 kubelet[2577]: E0702 00:22:58.403836 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:22:59.559566 kubelet[2577]: I0702 00:22:59.559521 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7889d5969c-8bv75" podStartSLOduration=48.292989393 podCreationTimestamp="2024-07-02 00:22:00 +0000 UTC" firstStartedPulling="2024-07-02 00:22:46.929468016 +0000 UTC m=+67.115076015" lastFinishedPulling="2024-07-02 00:22:58.195947772 +0000 UTC m=+78.381555771" observedRunningTime="2024-07-02 00:22:59.557197683 +0000 UTC m=+79.742805682" watchObservedRunningTime="2024-07-02 00:22:59.559469149 +0000 UTC m=+79.745077148" Jul 2 00:23:01.749688 systemd[1]: Started 
sshd@16-10.0.0.95:22-10.0.0.1:59346.service - OpenSSH per-connection server daemon (10.0.0.1:59346). Jul 2 00:23:02.361222 sshd[4691]: Accepted publickey for core from 10.0.0.1 port 59346 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI Jul 2 00:23:02.363084 sshd[4691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:02.367310 systemd-logind[1431]: New session 17 of user core. Jul 2 00:23:02.372266 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:23:02.534283 sshd[4691]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:02.538408 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:59346.service: Deactivated successfully. Jul 2 00:23:02.539060 systemd-logind[1431]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:23:02.542522 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:23:02.546845 systemd-logind[1431]: Removed session 17. Jul 2 00:23:03.272446 containerd[1449]: time="2024-07-02T00:23:03.272362888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:03.321952 containerd[1449]: time="2024-07-02T00:23:03.321862521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:23:03.370756 containerd[1449]: time="2024-07-02T00:23:03.370676320Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:03.466476 containerd[1449]: time="2024-07-02T00:23:03.466371592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:03.467709 containerd[1449]: 
time="2024-07-02T00:23:03.467250814Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 5.271061441s" Jul 2 00:23:03.467709 containerd[1449]: time="2024-07-02T00:23:03.467293607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:23:03.469516 containerd[1449]: time="2024-07-02T00:23:03.469489940Z" level=info msg="CreateContainer within sandbox \"a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:23:04.576505 containerd[1449]: time="2024-07-02T00:23:04.576433793Z" level=info msg="CreateContainer within sandbox \"a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0d700651736497c240887a7d0eba5a2972b2c43cb2067325ec780b323830e6e8\"" Jul 2 00:23:04.577197 containerd[1449]: time="2024-07-02T00:23:04.577151687Z" level=info msg="StartContainer for \"0d700651736497c240887a7d0eba5a2972b2c43cb2067325ec780b323830e6e8\"" Jul 2 00:23:04.615284 systemd[1]: Started cri-containerd-0d700651736497c240887a7d0eba5a2972b2c43cb2067325ec780b323830e6e8.scope - libcontainer container 0d700651736497c240887a7d0eba5a2972b2c43cb2067325ec780b323830e6e8. 
Jul 2 00:23:04.880614 containerd[1449]: time="2024-07-02T00:23:04.880446447Z" level=info msg="StartContainer for \"0d700651736497c240887a7d0eba5a2972b2c43cb2067325ec780b323830e6e8\" returns successfully"
Jul 2 00:23:04.998469 kubelet[2577]: I0702 00:23:04.998437 2577 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 2 00:23:04.998469 kubelet[2577]: I0702 00:23:04.998478 2577 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 2 00:23:05.455590 kubelet[2577]: I0702 00:23:05.455360 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-c6f8d" podStartSLOduration=47.070619342 podCreationTimestamp="2024-07-02 00:22:00 +0000 UTC" firstStartedPulling="2024-07-02 00:22:45.083544163 +0000 UTC m=+65.269152162" lastFinishedPulling="2024-07-02 00:23:03.468244567 +0000 UTC m=+83.653852556" observedRunningTime="2024-07-02 00:23:05.4545329 +0000 UTC m=+85.640140909" watchObservedRunningTime="2024-07-02 00:23:05.455319736 +0000 UTC m=+85.640927735"
Jul 2 00:23:05.902948 kubelet[2577]: E0702 00:23:05.901753 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:23:07.547499 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:59360.service - OpenSSH per-connection server daemon (10.0.0.1:59360).
Jul 2 00:23:07.587741 sshd[4751]: Accepted publickey for core from 10.0.0.1 port 59360 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:07.589347 sshd[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:07.593744 systemd-logind[1431]: New session 18 of user core.
Jul 2 00:23:07.602242 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 00:23:07.768199 sshd[4751]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:07.772582 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:59360.service: Deactivated successfully.
Jul 2 00:23:07.774507 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:23:07.775214 systemd-logind[1431]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:23:07.776268 systemd-logind[1431]: Removed session 18.
Jul 2 00:23:12.785539 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:42280.service - OpenSSH per-connection server daemon (10.0.0.1:42280).
Jul 2 00:23:12.832976 sshd[4771]: Accepted publickey for core from 10.0.0.1 port 42280 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:12.835028 sshd[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:12.840527 systemd-logind[1431]: New session 19 of user core.
Jul 2 00:23:12.847439 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 00:23:12.961407 sshd[4771]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:12.965745 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:42280.service: Deactivated successfully.
Jul 2 00:23:12.968249 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:23:12.969044 systemd-logind[1431]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:23:12.970026 systemd-logind[1431]: Removed session 19.
Jul 2 00:23:16.901748 kubelet[2577]: E0702 00:23:16.901684 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:23:17.979708 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:42284.service - OpenSSH per-connection server daemon (10.0.0.1:42284).
Jul 2 00:23:18.011050 sshd[4785]: Accepted publickey for core from 10.0.0.1 port 42284 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:18.013148 sshd[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:18.017768 systemd-logind[1431]: New session 20 of user core.
Jul 2 00:23:18.023678 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 00:23:18.202270 sshd[4785]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:18.206399 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:42284.service: Deactivated successfully.
Jul 2 00:23:18.208970 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:23:18.209925 systemd-logind[1431]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:23:18.211361 systemd-logind[1431]: Removed session 20.
Jul 2 00:23:23.215181 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:57556.service - OpenSSH per-connection server daemon (10.0.0.1:57556).
Jul 2 00:23:23.254017 sshd[4810]: Accepted publickey for core from 10.0.0.1 port 57556 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:23.256134 sshd[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:23.261361 systemd-logind[1431]: New session 21 of user core.
Jul 2 00:23:23.276434 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 00:23:23.405330 sshd[4810]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:23.418360 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:57556.service: Deactivated successfully.
Jul 2 00:23:23.420867 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:23:23.423552 systemd-logind[1431]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:23:23.434852 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:57562.service - OpenSSH per-connection server daemon (10.0.0.1:57562).
Jul 2 00:23:23.436557 systemd-logind[1431]: Removed session 21.
Jul 2 00:23:23.478005 sshd[4824]: Accepted publickey for core from 10.0.0.1 port 57562 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:23.480281 sshd[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:23.485647 systemd-logind[1431]: New session 22 of user core.
Jul 2 00:23:23.493986 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 00:23:23.952791 sshd[4824]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:23.964436 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:57562.service: Deactivated successfully.
Jul 2 00:23:23.966654 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:23:23.968539 systemd-logind[1431]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:23:23.974561 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:57566.service - OpenSSH per-connection server daemon (10.0.0.1:57566).
Jul 2 00:23:23.975957 systemd-logind[1431]: Removed session 22.
Jul 2 00:23:24.016998 sshd[4837]: Accepted publickey for core from 10.0.0.1 port 57566 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:24.019043 sshd[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:24.024204 systemd-logind[1431]: New session 23 of user core.
Jul 2 00:23:24.031393 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 00:23:25.079423 sshd[4837]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:25.094045 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:57566.service: Deactivated successfully.
Jul 2 00:23:25.097722 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:23:25.099399 systemd-logind[1431]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:23:25.117626 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:57574.service - OpenSSH per-connection server daemon (10.0.0.1:57574).
Jul 2 00:23:25.121639 systemd-logind[1431]: Removed session 23.
Jul 2 00:23:25.155549 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 57574 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:25.158021 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:25.164261 systemd-logind[1431]: New session 24 of user core.
Jul 2 00:23:25.174309 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 00:23:25.570153 sshd[4880]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:25.582782 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:57574.service: Deactivated successfully.
Jul 2 00:23:25.585747 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:23:25.589175 systemd-logind[1431]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:23:25.604658 systemd[1]: Started sshd@24-10.0.0.95:22-10.0.0.1:57582.service - OpenSSH per-connection server daemon (10.0.0.1:57582).
Jul 2 00:23:25.606420 systemd-logind[1431]: Removed session 24.
Jul 2 00:23:25.645363 sshd[4894]: Accepted publickey for core from 10.0.0.1 port 57582 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:25.647798 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:25.656605 systemd-logind[1431]: New session 25 of user core.
Jul 2 00:23:25.665231 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 00:23:25.982380 sshd[4894]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:25.987448 systemd[1]: sshd@24-10.0.0.95:22-10.0.0.1:57582.service: Deactivated successfully.
Jul 2 00:23:25.989850 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 00:23:25.990684 systemd-logind[1431]: Session 25 logged out. Waiting for processes to exit.
Jul 2 00:23:25.991835 systemd-logind[1431]: Removed session 25.
Jul 2 00:23:26.365604 kubelet[2577]: I0702 00:23:26.365417 2577 topology_manager.go:215] "Topology Admit Handler" podUID="70d7b3c9-6f04-45b8-82be-660dea2f03d7" podNamespace="calico-apiserver" podName="calico-apiserver-68f7485b7-6xc84"
Jul 2 00:23:26.375799 systemd[1]: Created slice kubepods-besteffort-pod70d7b3c9_6f04_45b8_82be_660dea2f03d7.slice - libcontainer container kubepods-besteffort-pod70d7b3c9_6f04_45b8_82be_660dea2f03d7.slice.
Jul 2 00:23:26.381052 kubelet[2577]: W0702 00:23:26.380980 2577 reflector.go:535] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object
Jul 2 00:23:26.381052 kubelet[2577]: E0702 00:23:26.381058 2577 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object
Jul 2 00:23:26.381431 kubelet[2577]: W0702 00:23:26.381123 2577 reflector.go:535] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object
Jul 2 00:23:26.381431 kubelet[2577]: E0702 00:23:26.381140 2577 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object
Jul 2 00:23:26.497860 kubelet[2577]: I0702 00:23:26.497762 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8l7c\" (UniqueName: \"kubernetes.io/projected/70d7b3c9-6f04-45b8-82be-660dea2f03d7-kube-api-access-l8l7c\") pod \"calico-apiserver-68f7485b7-6xc84\" (UID: \"70d7b3c9-6f04-45b8-82be-660dea2f03d7\") " pod="calico-apiserver/calico-apiserver-68f7485b7-6xc84"
Jul 2 00:23:26.497860 kubelet[2577]: I0702 00:23:26.497844 2577 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/70d7b3c9-6f04-45b8-82be-660dea2f03d7-calico-apiserver-certs\") pod \"calico-apiserver-68f7485b7-6xc84\" (UID: \"70d7b3c9-6f04-45b8-82be-660dea2f03d7\") " pod="calico-apiserver/calico-apiserver-68f7485b7-6xc84"
Jul 2 00:23:27.885125 containerd[1449]: time="2024-07-02T00:23:27.885030222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f7485b7-6xc84,Uid:70d7b3c9-6f04-45b8-82be-660dea2f03d7,Namespace:calico-apiserver,Attempt:0,}"
Jul 2 00:23:28.726251 systemd-networkd[1385]: cali9146e04d913: Link UP
Jul 2 00:23:28.726505 systemd-networkd[1385]: cali9146e04d913: Gained carrier
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.504 [INFO][4937] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0 calico-apiserver-68f7485b7- calico-apiserver 70d7b3c9-6f04-45b8-82be-660dea2f03d7 1189 0 2024-07-02 00:23:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68f7485b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68f7485b7-6xc84 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9146e04d913 [] []}} ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Namespace="calico-apiserver" Pod="calico-apiserver-68f7485b7-6xc84" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f7485b7--6xc84-"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.504 [INFO][4937] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Namespace="calico-apiserver" Pod="calico-apiserver-68f7485b7-6xc84" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.559 [INFO][4951] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" HandleID="k8s-pod-network.4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Workload="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.678 [INFO][4951] ipam_plugin.go 264: Auto assigning IP ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" HandleID="k8s-pod-network.4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Workload="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000590430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68f7485b7-6xc84", "timestamp":"2024-07-02 00:23:28.559788922 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.678 [INFO][4951] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.678 [INFO][4951] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.678 [INFO][4951] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.696 [INFO][4951] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" host="localhost"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.700 [INFO][4951] ipam.go 372: Looking up existing affinities for host host="localhost"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.704 [INFO][4951] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.706 [INFO][4951] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.708 [INFO][4951] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.708 [INFO][4951] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" host="localhost"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.710 [INFO][4951] ipam.go 1685: Creating new handle: k8s-pod-network.4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.713 [INFO][4951] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" host="localhost"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.719 [INFO][4951] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" host="localhost"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.719 [INFO][4951] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" host="localhost"
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.719 [INFO][4951] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:23:28.740334 containerd[1449]: 2024-07-02 00:23:28.719 [INFO][4951] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" HandleID="k8s-pod-network.4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Workload="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0"
Jul 2 00:23:28.741204 containerd[1449]: 2024-07-02 00:23:28.723 [INFO][4937] k8s.go 386: Populated endpoint ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Namespace="calico-apiserver" Pod="calico-apiserver-68f7485b7-6xc84" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0", GenerateName:"calico-apiserver-68f7485b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"70d7b3c9-6f04-45b8-82be-660dea2f03d7", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f7485b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68f7485b7-6xc84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9146e04d913", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:23:28.741204 containerd[1449]: 2024-07-02 00:23:28.724 [INFO][4937] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Namespace="calico-apiserver" Pod="calico-apiserver-68f7485b7-6xc84" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0"
Jul 2 00:23:28.741204 containerd[1449]: 2024-07-02 00:23:28.724 [INFO][4937] dataplane_linux.go 68: Setting the host side veth name to cali9146e04d913 ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Namespace="calico-apiserver" Pod="calico-apiserver-68f7485b7-6xc84" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0"
Jul 2 00:23:28.741204 containerd[1449]: 2024-07-02 00:23:28.726 [INFO][4937] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Namespace="calico-apiserver" Pod="calico-apiserver-68f7485b7-6xc84" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0"
Jul 2 00:23:28.741204 containerd[1449]: 2024-07-02 00:23:28.727 [INFO][4937] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Namespace="calico-apiserver" Pod="calico-apiserver-68f7485b7-6xc84" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0", GenerateName:"calico-apiserver-68f7485b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"70d7b3c9-6f04-45b8-82be-660dea2f03d7", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f7485b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914", Pod:"calico-apiserver-68f7485b7-6xc84", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9146e04d913", MAC:"8e:44:d9:91:d0:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:23:28.741204 containerd[1449]: 2024-07-02 00:23:28.736 [INFO][4937] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914" Namespace="calico-apiserver" Pod="calico-apiserver-68f7485b7-6xc84" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f7485b7--6xc84-eth0"
Jul 2 00:23:28.821952 containerd[1449]: time="2024-07-02T00:23:28.821224570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:23:28.821952 containerd[1449]: time="2024-07-02T00:23:28.821904462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:23:28.821952 containerd[1449]: time="2024-07-02T00:23:28.821920192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:23:28.821952 containerd[1449]: time="2024-07-02T00:23:28.821930712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:23:28.850357 systemd[1]: Started cri-containerd-4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914.scope - libcontainer container 4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914.
Jul 2 00:23:28.863641 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 00:23:28.897004 containerd[1449]: time="2024-07-02T00:23:28.896442474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f7485b7-6xc84,Uid:70d7b3c9-6f04-45b8-82be-660dea2f03d7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914\""
Jul 2 00:23:28.898711 containerd[1449]: time="2024-07-02T00:23:28.898634523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul 2 00:23:29.318478 systemd[1]: run-containerd-runc-k8s.io-4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914-runc.l5oht7.mount: Deactivated successfully.
Jul 2 00:23:29.773358 systemd-networkd[1385]: cali9146e04d913: Gained IPv6LL
Jul 2 00:23:31.009635 systemd[1]: Started sshd@25-10.0.0.95:22-10.0.0.1:55638.service - OpenSSH per-connection server daemon (10.0.0.1:55638).
Jul 2 00:23:31.041332 sshd[5016]: Accepted publickey for core from 10.0.0.1 port 55638 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:31.043457 sshd[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:31.050760 systemd-logind[1431]: New session 26 of user core.
Jul 2 00:23:31.058409 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 00:23:31.210268 sshd[5016]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:31.216286 systemd[1]: sshd@25-10.0.0.95:22-10.0.0.1:55638.service: Deactivated successfully.
Jul 2 00:23:31.219635 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:23:31.220913 systemd-logind[1431]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:23:31.222189 systemd-logind[1431]: Removed session 26.
Jul 2 00:23:33.767699 containerd[1449]: time="2024-07-02T00:23:33.767640129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:33.795195 containerd[1449]: time="2024-07-02T00:23:33.795051803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jul 2 00:23:33.864204 containerd[1449]: time="2024-07-02T00:23:33.864128890Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:33.934670 containerd[1449]: time="2024-07-02T00:23:33.934589558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:23:33.935713 containerd[1449]: time="2024-07-02T00:23:33.935644390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 5.036958196s"
Jul 2 00:23:33.935713 containerd[1449]: time="2024-07-02T00:23:33.935705166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul 2 00:23:33.937727 containerd[1449]: time="2024-07-02T00:23:33.937678657Z" level=info msg="CreateContainer within sandbox \"4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 00:23:34.901125 kubelet[2577]: E0702 00:23:34.901044 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:23:35.141747 containerd[1449]: time="2024-07-02T00:23:35.141659570Z" level=info msg="CreateContainer within sandbox \"4da36241d0a69be2d3bea5827f295667bb5e6264c55200814e211230a227a914\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dbea54f5a6a4e8d24793ed258e24a1223e850d7350e4148d92c87d8b73548843\""
Jul 2 00:23:35.142554 containerd[1449]: time="2024-07-02T00:23:35.142495002Z" level=info msg="StartContainer for \"dbea54f5a6a4e8d24793ed258e24a1223e850d7350e4148d92c87d8b73548843\""
Jul 2 00:23:35.183259 systemd[1]: Started cri-containerd-dbea54f5a6a4e8d24793ed258e24a1223e850d7350e4148d92c87d8b73548843.scope - libcontainer container dbea54f5a6a4e8d24793ed258e24a1223e850d7350e4148d92c87d8b73548843.
Jul 2 00:23:35.438009 containerd[1449]: time="2024-07-02T00:23:35.437555197Z" level=info msg="StartContainer for \"dbea54f5a6a4e8d24793ed258e24a1223e850d7350e4148d92c87d8b73548843\" returns successfully"
Jul 2 00:23:35.705046 kubelet[2577]: I0702 00:23:35.704074 2577 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68f7485b7-6xc84" podStartSLOduration=4.666412118 podCreationTimestamp="2024-07-02 00:23:26 +0000 UTC" firstStartedPulling="2024-07-02 00:23:28.898344749 +0000 UTC m=+109.083952748" lastFinishedPulling="2024-07-02 00:23:33.935969783 +0000 UTC m=+114.121577792" observedRunningTime="2024-07-02 00:23:35.70392151 +0000 UTC m=+115.889529509" watchObservedRunningTime="2024-07-02 00:23:35.704037162 +0000 UTC m=+115.889645161"
Jul 2 00:23:36.226079 systemd[1]: Started sshd@26-10.0.0.95:22-10.0.0.1:55644.service - OpenSSH per-connection server daemon (10.0.0.1:55644).
Jul 2 00:23:36.287587 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 55644 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:36.289830 sshd[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:36.295096 systemd-logind[1431]: New session 27 of user core.
Jul 2 00:23:36.305467 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 00:23:36.453753 sshd[5086]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:36.458429 systemd-logind[1431]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:23:36.460040 systemd[1]: sshd@26-10.0.0.95:22-10.0.0.1:55644.service: Deactivated successfully.
Jul 2 00:23:36.463948 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:23:36.466558 systemd-logind[1431]: Removed session 27.
Jul 2 00:23:39.904817 containerd[1449]: time="2024-07-02T00:23:39.904605686Z" level=info msg="StopPodSandbox for \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\""
Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:39.960 [WARNING][5123] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--fp5cw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9761039f-dc00-496a-bcc0-b4763a123012", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc", Pod:"coredns-5dd5756b68-fp5cw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58c7f6b7a18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:39.960 [INFO][5123] k8s.go 608: Cleaning up netns 
ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:39.960 [INFO][5123] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" iface="eth0" netns="" Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:39.960 [INFO][5123] k8s.go 615: Releasing IP address(es) ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:39.961 [INFO][5123] utils.go 188: Calico CNI releasing IP address ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:39.993 [INFO][5132] ipam_plugin.go 411: Releasing address using handleID ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" HandleID="k8s-pod-network.f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:39.993 [INFO][5132] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:39.993 [INFO][5132] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:40.121 [WARNING][5132] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" HandleID="k8s-pod-network.f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:40.121 [INFO][5132] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" HandleID="k8s-pod-network.f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:40.123 [INFO][5132] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:23:40.131175 containerd[1449]: 2024-07-02 00:23:40.128 [INFO][5123] k8s.go 621: Teardown processing complete. ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:23:40.131887 containerd[1449]: time="2024-07-02T00:23:40.131232832Z" level=info msg="TearDown network for sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\" successfully" Jul 2 00:23:40.131887 containerd[1449]: time="2024-07-02T00:23:40.131263711Z" level=info msg="StopPodSandbox for \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\" returns successfully" Jul 2 00:23:40.131949 containerd[1449]: time="2024-07-02T00:23:40.131915531Z" level=info msg="RemovePodSandbox for \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\"" Jul 2 00:23:40.135378 containerd[1449]: time="2024-07-02T00:23:40.135347447Z" level=info msg="Forcibly stopping sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\"" Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.188 [WARNING][5155] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--fp5cw-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9761039f-dc00-496a-bcc0-b4763a123012", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42da0b0b69a03785e893f487de04c56e90b68fa269564fc95f215714a7d266bc", Pod:"coredns-5dd5756b68-fp5cw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58c7f6b7a18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.188 [INFO][5155] k8s.go 608: Cleaning up netns 
ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.188 [INFO][5155] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" iface="eth0" netns="" Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.188 [INFO][5155] k8s.go 615: Releasing IP address(es) ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.188 [INFO][5155] utils.go 188: Calico CNI releasing IP address ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.228 [INFO][5162] ipam_plugin.go 411: Releasing address using handleID ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" HandleID="k8s-pod-network.f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.228 [INFO][5162] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.228 [INFO][5162] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.251 [WARNING][5162] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" HandleID="k8s-pod-network.f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.251 [INFO][5162] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" HandleID="k8s-pod-network.f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Workload="localhost-k8s-coredns--5dd5756b68--fp5cw-eth0" Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.275 [INFO][5162] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:23:40.283238 containerd[1449]: 2024-07-02 00:23:40.280 [INFO][5155] k8s.go 621: Teardown processing complete. ContainerID="f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56" Jul 2 00:23:40.283907 containerd[1449]: time="2024-07-02T00:23:40.283313465Z" level=info msg="TearDown network for sandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\" successfully" Jul 2 00:23:40.555614 containerd[1449]: time="2024-07-02T00:23:40.555431363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:23:40.562449 containerd[1449]: time="2024-07-02T00:23:40.562388646Z" level=info msg="RemovePodSandbox \"f895964f5d3c070be2094f9bd04d434b14f17f93fb1b077a1813706602c13a56\" returns successfully" Jul 2 00:23:40.563195 containerd[1449]: time="2024-07-02T00:23:40.563146638Z" level=info msg="StopPodSandbox for \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\"" Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.631 [WARNING][5188] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0", GenerateName:"calico-kube-controllers-7889d5969c-", Namespace:"calico-system", SelfLink:"", UID:"d2f026c7-909c-46f0-8429-002f30b8f45f", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7889d5969c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d", Pod:"calico-kube-controllers-7889d5969c-8bv75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic21b500afcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.631 [INFO][5188] k8s.go 608: Cleaning up netns ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.631 [INFO][5188] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" iface="eth0" netns="" Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.632 [INFO][5188] k8s.go 615: Releasing IP address(es) ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.632 [INFO][5188] utils.go 188: Calico CNI releasing IP address ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.667 [INFO][5195] ipam_plugin.go 411: Releasing address using handleID ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" HandleID="k8s-pod-network.1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.667 [INFO][5195] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.667 [INFO][5195] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.677 [WARNING][5195] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" HandleID="k8s-pod-network.1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.677 [INFO][5195] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" HandleID="k8s-pod-network.1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.682 [INFO][5195] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:23:40.693517 containerd[1449]: 2024-07-02 00:23:40.687 [INFO][5188] k8s.go 621: Teardown processing complete. ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:23:40.693977 containerd[1449]: time="2024-07-02T00:23:40.693613901Z" level=info msg="TearDown network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\" successfully" Jul 2 00:23:40.693977 containerd[1449]: time="2024-07-02T00:23:40.693661963Z" level=info msg="StopPodSandbox for \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\" returns successfully" Jul 2 00:23:40.695948 containerd[1449]: time="2024-07-02T00:23:40.695558027Z" level=info msg="RemovePodSandbox for \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\"" Jul 2 00:23:40.695948 containerd[1449]: time="2024-07-02T00:23:40.695603684Z" level=info msg="Forcibly stopping sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\"" Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.767 [WARNING][5218] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0", GenerateName:"calico-kube-controllers-7889d5969c-", Namespace:"calico-system", SelfLink:"", UID:"d2f026c7-909c-46f0-8429-002f30b8f45f", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7889d5969c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae3bd5b3b02b979e002f81834b98f8f4b8e0d39f90b19a3ab5edfe5abbf3be3d", Pod:"calico-kube-controllers-7889d5969c-8bv75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic21b500afcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.767 [INFO][5218] k8s.go 608: Cleaning up netns ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.767 [INFO][5218] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" iface="eth0" netns="" Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.767 [INFO][5218] k8s.go 615: Releasing IP address(es) ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.767 [INFO][5218] utils.go 188: Calico CNI releasing IP address ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.802 [INFO][5226] ipam_plugin.go 411: Releasing address using handleID ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" HandleID="k8s-pod-network.1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.802 [INFO][5226] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.802 [INFO][5226] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.810 [WARNING][5226] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" HandleID="k8s-pod-network.1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.810 [INFO][5226] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" HandleID="k8s-pod-network.1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Workload="localhost-k8s-calico--kube--controllers--7889d5969c--8bv75-eth0" Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.812 [INFO][5226] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:23:40.821801 containerd[1449]: 2024-07-02 00:23:40.817 [INFO][5218] k8s.go 621: Teardown processing complete. ContainerID="1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c" Jul 2 00:23:40.825146 containerd[1449]: time="2024-07-02T00:23:40.822618802Z" level=info msg="TearDown network for sandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\" successfully" Jul 2 00:23:40.841628 containerd[1449]: time="2024-07-02T00:23:40.841545573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:23:40.841834 containerd[1449]: time="2024-07-02T00:23:40.841644873Z" level=info msg="RemovePodSandbox \"1c61c1e232e147d1c5f282300444a364a52a2309e64fcaeb52ff1f02b1c8295c\" returns successfully" Jul 2 00:23:40.842375 containerd[1449]: time="2024-07-02T00:23:40.842234964Z" level=info msg="StopPodSandbox for \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\"" Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.888 [WARNING][5250] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c6f8d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb1123fc-5807-4f3a-afae-4341c695ae1f", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3", Pod:"csi-node-driver-c6f8d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali1b5aeb09e27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.888 [INFO][5250] k8s.go 608: Cleaning up netns ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.888 [INFO][5250] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" iface="eth0" netns="" Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.888 [INFO][5250] k8s.go 615: Releasing IP address(es) ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.888 [INFO][5250] utils.go 188: Calico CNI releasing IP address ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.919 [INFO][5258] ipam_plugin.go 411: Releasing address using handleID ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" HandleID="k8s-pod-network.4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.920 [INFO][5258] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.920 [INFO][5258] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.925 [WARNING][5258] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" HandleID="k8s-pod-network.4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.925 [INFO][5258] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" HandleID="k8s-pod-network.4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.927 [INFO][5258] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:23:40.932731 containerd[1449]: 2024-07-02 00:23:40.930 [INFO][5250] k8s.go 621: Teardown processing complete. ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Jul 2 00:23:40.933706 containerd[1449]: time="2024-07-02T00:23:40.932753881Z" level=info msg="TearDown network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\" successfully" Jul 2 00:23:40.933706 containerd[1449]: time="2024-07-02T00:23:40.932782786Z" level=info msg="StopPodSandbox for \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\" returns successfully" Jul 2 00:23:40.933706 containerd[1449]: time="2024-07-02T00:23:40.933357789Z" level=info msg="RemovePodSandbox for \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\"" Jul 2 00:23:40.933706 containerd[1449]: time="2024-07-02T00:23:40.933398867Z" level=info msg="Forcibly stopping sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\"" Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:40.980 [WARNING][5282] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c6f8d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb1123fc-5807-4f3a-afae-4341c695ae1f", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a80973108d2f924f961270a9ba055190ff6310bb1f1b82d3bb3159f8af6311b3", Pod:"csi-node-driver-c6f8d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1b5aeb09e27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:40.981 [INFO][5282] k8s.go 608: Cleaning up netns ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:40.981 [INFO][5282] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" iface="eth0" netns="" Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:40.981 [INFO][5282] k8s.go 615: Releasing IP address(es) ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:40.981 [INFO][5282] utils.go 188: Calico CNI releasing IP address ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:41.005 [INFO][5290] ipam_plugin.go 411: Releasing address using handleID ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" HandleID="k8s-pod-network.4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:41.005 [INFO][5290] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:41.005 [INFO][5290] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:41.013 [WARNING][5290] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" HandleID="k8s-pod-network.4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:41.013 [INFO][5290] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" HandleID="k8s-pod-network.4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Workload="localhost-k8s-csi--node--driver--c6f8d-eth0" Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:41.015 [INFO][5290] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:23:41.021093 containerd[1449]: 2024-07-02 00:23:41.018 [INFO][5282] k8s.go 621: Teardown processing complete. ContainerID="4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9" Jul 2 00:23:41.021707 containerd[1449]: time="2024-07-02T00:23:41.021179108Z" level=info msg="TearDown network for sandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\" successfully" Jul 2 00:23:41.030571 containerd[1449]: time="2024-07-02T00:23:41.030457409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:23:41.030571 containerd[1449]: time="2024-07-02T00:23:41.030593048Z" level=info msg="RemovePodSandbox \"4fee747b1ef3057ddaba8d9c7cf53df6bf351faf58dc12454133ec45f19564b9\" returns successfully" Jul 2 00:23:41.031382 containerd[1449]: time="2024-07-02T00:23:41.031319470Z" level=info msg="StopPodSandbox for \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\"" Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.082 [WARNING][5314] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xnbjp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"bad383c5-33ee-4ea7-a464-f6479e4f0591", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca", Pod:"coredns-5dd5756b68-xnbjp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24864ddaa5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.082 [INFO][5314] k8s.go 608: Cleaning up netns
ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.082 [INFO][5314] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" iface="eth0" netns=""
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.082 [INFO][5314] k8s.go 615: Releasing IP address(es) ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.082 [INFO][5314] utils.go 188: Calico CNI releasing IP address ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.112 [INFO][5322] ipam_plugin.go 411: Releasing address using handleID ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" HandleID="k8s-pod-network.478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.112 [INFO][5322] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.112 [INFO][5322] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.117 [WARNING][5322] ipam_plugin.go 428: Asked to release address but it doesn't exist.
Ignoring ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" HandleID="k8s-pod-network.478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.117 [INFO][5322] ipam_plugin.go 439: Releasing address using workloadID ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" HandleID="k8s-pod-network.478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.118 [INFO][5322] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:23:41.123855 containerd[1449]: 2024-07-02 00:23:41.121 [INFO][5314] k8s.go 621: Teardown processing complete. ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:23:41.123855 containerd[1449]: time="2024-07-02T00:23:41.123781936Z" level=info msg="TearDown network for sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\" successfully"
Jul 2 00:23:41.123855 containerd[1449]: time="2024-07-02T00:23:41.123813346Z" level=info msg="StopPodSandbox for \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\" returns successfully"
Jul 2 00:23:41.126175 containerd[1449]: time="2024-07-02T00:23:41.126138121Z" level=info msg="RemovePodSandbox for \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\""
Jul 2 00:23:41.126293 containerd[1449]: time="2024-07-02T00:23:41.126184089Z" level=info msg="Forcibly stopping sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\""
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.164 [WARNING][5344] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xnbjp-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"bad383c5-33ee-4ea7-a464-f6479e4f0591", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 21, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11ec2cb3538019b20ce0975ea3354a2bc0cd8b59bcbf44cd63a709e5a2d466ca", Pod:"coredns-5dd5756b68-xnbjp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24864ddaa5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.164 [INFO][5344] k8s.go 608: Cleaning up netns
ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.164 [INFO][5344] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" iface="eth0" netns=""
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.164 [INFO][5344] k8s.go 615: Releasing IP address(es) ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.164 [INFO][5344] utils.go 188: Calico CNI releasing IP address ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.187 [INFO][5352] ipam_plugin.go 411: Releasing address using handleID ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" HandleID="k8s-pod-network.478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.187 [INFO][5352] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.187 [INFO][5352] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.194 [WARNING][5352] ipam_plugin.go 428: Asked to release address but it doesn't exist.
Ignoring ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" HandleID="k8s-pod-network.478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.194 [INFO][5352] ipam_plugin.go 439: Releasing address using workloadID ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" HandleID="k8s-pod-network.478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8" Workload="localhost-k8s-coredns--5dd5756b68--xnbjp-eth0"
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.196 [INFO][5352] ipam_plugin.go 373: Released host-wide IPAM lock.
Jul 2 00:23:41.201508 containerd[1449]: 2024-07-02 00:23:41.198 [INFO][5344] k8s.go 621: Teardown processing complete. ContainerID="478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8"
Jul 2 00:23:41.202224 containerd[1449]: time="2024-07-02T00:23:41.201558827Z" level=info msg="TearDown network for sandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\" successfully"
Jul 2 00:23:41.209760 containerd[1449]: time="2024-07-02T00:23:41.209698997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 2 00:23:41.209896 containerd[1449]: time="2024-07-02T00:23:41.209782598Z" level=info msg="RemovePodSandbox \"478def65ec128cd6d2058b9079099a3b4e50ceefee09c4c7bba147d8c56078e8\" returns successfully"
Jul 2 00:23:41.473614 systemd[1]: Started sshd@27-10.0.0.95:22-10.0.0.1:47128.service - OpenSSH per-connection server daemon (10.0.0.1:47128).
Jul 2 00:23:41.518437 sshd[5360]: Accepted publickey for core from 10.0.0.1 port 47128 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:41.520848 sshd[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:41.526964 systemd-logind[1431]: New session 28 of user core.
Jul 2 00:23:41.532351 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 2 00:23:41.884267 sshd[5360]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:41.889546 systemd[1]: sshd@27-10.0.0.95:22-10.0.0.1:47128.service: Deactivated successfully.
Jul 2 00:23:41.891923 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 00:23:41.897465 systemd-logind[1431]: Session 28 logged out. Waiting for processes to exit.
Jul 2 00:23:41.900238 systemd-logind[1431]: Removed session 28.
Jul 2 00:23:46.690508 systemd[1]: Started sshd@28-10.0.0.95:22-10.0.0.1:47134.service - OpenSSH per-connection server daemon (10.0.0.1:47134).
Jul 2 00:23:46.723903 sshd[5387]: Accepted publickey for core from 10.0.0.1 port 47134 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:46.725585 sshd[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:46.730362 systemd-logind[1431]: New session 29 of user core.
Jul 2 00:23:46.741263 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 2 00:23:46.948765 sshd[5387]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:46.953670 systemd[1]: sshd@28-10.0.0.95:22-10.0.0.1:47134.service: Deactivated successfully.
Jul 2 00:23:46.956041 systemd[1]: session-29.scope: Deactivated successfully.
Jul 2 00:23:46.956865 systemd-logind[1431]: Session 29 logged out. Waiting for processes to exit.
Jul 2 00:23:46.958022 systemd-logind[1431]: Removed session 29.
Jul 2 00:23:51.961154 systemd[1]: Started sshd@29-10.0.0.95:22-10.0.0.1:60690.service - OpenSSH per-connection server daemon (10.0.0.1:60690).
Jul 2 00:23:52.001678 sshd[5404]: Accepted publickey for core from 10.0.0.1 port 60690 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:52.003708 sshd[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:52.009671 systemd-logind[1431]: New session 30 of user core.
Jul 2 00:23:52.016313 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 2 00:23:52.420772 sshd[5404]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:52.426837 systemd[1]: sshd@29-10.0.0.95:22-10.0.0.1:60690.service: Deactivated successfully.
Jul 2 00:23:52.428932 systemd[1]: session-30.scope: Deactivated successfully.
Jul 2 00:23:52.430163 systemd-logind[1431]: Session 30 logged out. Waiting for processes to exit.
Jul 2 00:23:52.431226 systemd-logind[1431]: Removed session 30.
Jul 2 00:23:54.901056 kubelet[2577]: E0702 00:23:54.901011 2577 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 00:23:57.434160 systemd[1]: Started sshd@30-10.0.0.95:22-10.0.0.1:60702.service - OpenSSH per-connection server daemon (10.0.0.1:60702).
Jul 2 00:23:57.467690 sshd[5446]: Accepted publickey for core from 10.0.0.1 port 60702 ssh2: RSA SHA256:ChgTELiNQDNPBLl7R+AT0aWahUNzCvNERC/D+nH4IGI
Jul 2 00:23:57.469715 sshd[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:57.474466 systemd-logind[1431]: New session 31 of user core.
Jul 2 00:23:57.485435 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 2 00:23:57.752480 sshd[5446]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:57.757699 systemd[1]: sshd@30-10.0.0.95:22-10.0.0.1:60702.service: Deactivated successfully.
Jul 2 00:23:57.760382 systemd[1]: session-31.scope: Deactivated successfully.
Jul 2 00:23:57.761484 systemd-logind[1431]: Session 31 logged out. Waiting for processes to exit.
Jul 2 00:23:57.762840 systemd-logind[1431]: Removed session 31.
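[Editor's note: the kubelet "Nameserver limits exceeded" error above is logged when the node's resolv.conf lists more nameservers than the glibc resolver honors (at most three, per resolv.conf(5)); kubelet drops the extras and reports the line it actually applied. A small sketch of that truncation, with a hypothetical helper name (not kubelet's code):]

```python
# glibc's resolver honors at most 3 "nameserver" lines; extras are ignored.
MAX_NAMESERVERS = 3


def applied_nameservers(resolv_conf_text):
    """Return the nameservers that would actually apply, plus a flag
    for whether the configured list exceeded the glibc limit."""
    servers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS], len(servers) > MAX_NAMESERVERS


conf = (
    "nameserver 1.1.1.1\n"
    "nameserver 1.0.0.1\n"
    "nameserver 8.8.8.8\n"
    "nameserver 8.8.4.4\n"  # fourth entry: omitted, triggering the warning
)
applied, exceeded = applied_nameservers(conf)
print(applied, exceeded)
```

With four configured entries, only the first three survive, matching the "applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" in the log.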