Sep 4 17:32:27.926810 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024
Sep 4 17:32:27.926832 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:32:27.926844 kernel: BIOS-provided physical RAM map:
Sep 4 17:32:27.926850 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 4 17:32:27.926856 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 4 17:32:27.926862 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 4 17:32:27.926870 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 4 17:32:27.926876 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 4 17:32:27.926882 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 4 17:32:27.926896 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 4 17:32:27.926906 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 4 17:32:27.926913 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 4 17:32:27.926919 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 4 17:32:27.926926 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 4 17:32:27.926934 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 4 17:32:27.926943 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 4 17:32:27.926949 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 4 17:32:27.926956 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 4 17:32:27.926963 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 4 17:32:27.926969 kernel: NX (Execute Disable) protection: active
Sep 4 17:32:27.926976 kernel: APIC: Static calls initialized
Sep 4 17:32:27.926983 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:32:27.926990 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4f9018
Sep 4 17:32:27.926997 kernel: SMBIOS 2.8 present.
Sep 4 17:32:27.927003 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Sep 4 17:32:27.927010 kernel: Hypervisor detected: KVM
Sep 4 17:32:27.927017 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 17:32:27.927026 kernel: kvm-clock: using sched offset of 4219462571 cycles
Sep 4 17:32:27.927033 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 17:32:27.927040 kernel: tsc: Detected 2794.746 MHz processor
Sep 4 17:32:27.927047 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 17:32:27.927055 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 17:32:27.927062 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 4 17:32:27.927069 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 4 17:32:27.927076 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 17:32:27.927082 kernel: Using GB pages for direct mapping
Sep 4 17:32:27.927092 kernel: Secure boot disabled
Sep 4 17:32:27.927099 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:32:27.927106 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 4 17:32:27.927113 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Sep 4 17:32:27.927123 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:32:27.927130 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:32:27.927140 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 4 17:32:27.927147 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:32:27.927154 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:32:27.927162 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:32:27.927169 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 4 17:32:27.927176 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Sep 4 17:32:27.927183 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Sep 4 17:32:27.927190 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 4 17:32:27.927200 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Sep 4 17:32:27.927207 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Sep 4 17:32:27.927215 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Sep 4 17:32:27.927222 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Sep 4 17:32:27.927229 kernel: No NUMA configuration found
Sep 4 17:32:27.927236 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 4 17:32:27.927243 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 4 17:32:27.927250 kernel: Zone ranges:
Sep 4 17:32:27.927257 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 17:32:27.927266 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 4 17:32:27.927274 kernel: Normal empty
Sep 4 17:32:27.927281 kernel: Movable zone start for each node
Sep 4 17:32:27.927288 kernel: Early memory node ranges
Sep 4 17:32:27.927295 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 4 17:32:27.927302 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 4 17:32:27.927309 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 4 17:32:27.927316 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 4 17:32:27.927323 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 4 17:32:27.927330 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 4 17:32:27.927340 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 4 17:32:27.927347 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:32:27.927354 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 4 17:32:27.927361 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 4 17:32:27.927368 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 17:32:27.927375 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 4 17:32:27.927383 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 4 17:32:27.927390 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 4 17:32:27.927397 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 4 17:32:27.927407 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 17:32:27.927414 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 17:32:27.927421 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 17:32:27.927428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 17:32:27.927436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 17:32:27.927443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 17:32:27.927450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 17:32:27.927457 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 17:32:27.927464 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 17:32:27.927474 kernel: TSC deadline timer available
Sep 4 17:32:27.927481 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 4 17:32:27.927488 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 17:32:27.927495 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 17:32:27.927502 kernel: kvm-guest: setup PV sched yield
Sep 4 17:32:27.927509 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Sep 4 17:32:27.927517 kernel: Booting paravirtualized kernel on KVM
Sep 4 17:32:27.927524 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 17:32:27.927531 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 17:32:27.927541 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Sep 4 17:32:27.927548 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Sep 4 17:32:27.927555 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 17:32:27.927562 kernel: kvm-guest: PV spinlocks enabled
Sep 4 17:32:27.927570 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 17:32:27.927579 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:32:27.927586 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:32:27.927593 kernel: random: crng init done
Sep 4 17:32:27.927601 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:32:27.927611 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:32:27.927630 kernel: Fallback order for Node 0: 0
Sep 4 17:32:27.927637 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 4 17:32:27.927644 kernel: Policy zone: DMA32
Sep 4 17:32:27.927651 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:32:27.927659 kernel: Memory: 2388204K/2567000K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 178536K reserved, 0K cma-reserved)
Sep 4 17:32:27.927666 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 17:32:27.927674 kernel: ftrace: allocating 37670 entries in 148 pages
Sep 4 17:32:27.927681 kernel: ftrace: allocated 148 pages with 3 groups
Sep 4 17:32:27.927691 kernel: Dynamic Preempt: voluntary
Sep 4 17:32:27.927698 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:32:27.927706 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:32:27.927713 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 17:32:27.927730 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:32:27.927738 kernel: Rude variant of Tasks RCU enabled.
Sep 4 17:32:27.927746 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:32:27.927753 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:32:27.927761 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 17:32:27.927768 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 17:32:27.927776 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:32:27.927783 kernel: Console: colour dummy device 80x25
Sep 4 17:32:27.927794 kernel: printk: console [ttyS0] enabled
Sep 4 17:32:27.927801 kernel: ACPI: Core revision 20230628
Sep 4 17:32:27.927809 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 17:32:27.927816 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 17:32:27.927824 kernel: x2apic enabled
Sep 4 17:32:27.927834 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 17:32:27.927842 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 17:32:27.927849 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 17:32:27.927857 kernel: kvm-guest: setup PV IPIs
Sep 4 17:32:27.927864 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 17:32:27.927872 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 4 17:32:27.927879 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Sep 4 17:32:27.927887 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 17:32:27.927901 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 17:32:27.927912 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 17:32:27.927920 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 17:32:27.927927 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 17:32:27.927935 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 4 17:32:27.927942 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 4 17:32:27.927950 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 17:32:27.927957 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 17:32:27.927965 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 17:32:27.927975 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 17:32:27.927982 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 17:32:27.927990 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 17:32:27.927998 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 17:32:27.928006 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 17:32:27.928013 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 17:32:27.928021 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 17:32:27.928028 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 17:32:27.928036 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 17:32:27.928046 kernel: Freeing SMP alternatives memory: 32K
Sep 4 17:32:27.928053 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:32:27.928061 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:32:27.928068 kernel: SELinux: Initializing.
Sep 4 17:32:27.928076 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:32:27.928083 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:32:27.928091 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 17:32:27.928098 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:32:27.928106 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:32:27.928116 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:32:27.928123 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 17:32:27.928131 kernel: ... version: 0
Sep 4 17:32:27.928138 kernel: ... bit width: 48
Sep 4 17:32:27.928146 kernel: ... generic registers: 6
Sep 4 17:32:27.928153 kernel: ... value mask: 0000ffffffffffff
Sep 4 17:32:27.928160 kernel: ... max period: 00007fffffffffff
Sep 4 17:32:27.928168 kernel: ... fixed-purpose events: 0
Sep 4 17:32:27.928176 kernel: ... event mask: 000000000000003f
Sep 4 17:32:27.928185 kernel: signal: max sigframe size: 1776
Sep 4 17:32:27.928193 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:32:27.928200 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:32:27.928208 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:32:27.928215 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 17:32:27.928223 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 17:32:27.928230 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 17:32:27.928238 kernel: smpboot: Max logical packages: 1
Sep 4 17:32:27.928245 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Sep 4 17:32:27.928255 kernel: devtmpfs: initialized
Sep 4 17:32:27.928262 kernel: x86/mm: Memory block size: 128MB
Sep 4 17:32:27.928270 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 4 17:32:27.928278 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 4 17:32:27.928285 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 4 17:32:27.928293 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 4 17:32:27.928301 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 4 17:32:27.928308 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:32:27.928316 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 17:32:27.928326 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:32:27.928333 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:32:27.928341 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:32:27.928348 kernel: audit: type=2000 audit(1725471148.141:1): state=initialized audit_enabled=0 res=1
Sep 4 17:32:27.928356 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:32:27.928363 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 17:32:27.928371 kernel: cpuidle: using governor menu
Sep 4 17:32:27.928378 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:32:27.928386 kernel: dca service started, version 1.12.1
Sep 4 17:32:27.928395 kernel: PCI: Using configuration type 1 for base access
Sep 4 17:32:27.928403 kernel: PCI: Using configuration type 1 for extended access
Sep 4 17:32:27.928410 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 17:32:27.928418 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:32:27.928426 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:32:27.928433 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:32:27.928441 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:32:27.928448 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:32:27.928456 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:32:27.928465 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:32:27.928473 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:32:27.928480 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:32:27.928488 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 4 17:32:27.928495 kernel: ACPI: Interpreter enabled
Sep 4 17:32:27.928502 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 17:32:27.928510 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 17:32:27.928517 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 17:32:27.928525 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 17:32:27.928535 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 4 17:32:27.928542 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:32:27.928729 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:32:27.928742 kernel: acpiphp: Slot [3] registered
Sep 4 17:32:27.928750 kernel: acpiphp: Slot [4] registered
Sep 4 17:32:27.928758 kernel: acpiphp: Slot [5] registered
Sep 4 17:32:27.928765 kernel: acpiphp: Slot [6] registered
Sep 4 17:32:27.928773 kernel: acpiphp: Slot [7] registered
Sep 4 17:32:27.928783 kernel: acpiphp: Slot [8] registered
Sep 4 17:32:27.928791 kernel: acpiphp: Slot [9] registered
Sep 4 17:32:27.928798 kernel: acpiphp: Slot [10] registered
Sep 4 17:32:27.928805 kernel: acpiphp: Slot [11] registered
Sep 4 17:32:27.928813 kernel: acpiphp: Slot [12] registered
Sep 4 17:32:27.928820 kernel: acpiphp: Slot [13] registered
Sep 4 17:32:27.928828 kernel: acpiphp: Slot [14] registered
Sep 4 17:32:27.928835 kernel: acpiphp: Slot [15] registered
Sep 4 17:32:27.928842 kernel: acpiphp: Slot [16] registered
Sep 4 17:32:27.928852 kernel: acpiphp: Slot [17] registered
Sep 4 17:32:27.928860 kernel: acpiphp: Slot [18] registered
Sep 4 17:32:27.928867 kernel: acpiphp: Slot [19] registered
Sep 4 17:32:27.928875 kernel: acpiphp: Slot [20] registered
Sep 4 17:32:27.928882 kernel: acpiphp: Slot [21] registered
Sep 4 17:32:27.928898 kernel: acpiphp: Slot [22] registered
Sep 4 17:32:27.928905 kernel: acpiphp: Slot [23] registered
Sep 4 17:32:27.928913 kernel: acpiphp: Slot [24] registered
Sep 4 17:32:27.928921 kernel: acpiphp: Slot [25] registered
Sep 4 17:32:27.928928 kernel: acpiphp: Slot [26] registered
Sep 4 17:32:27.928938 kernel: acpiphp: Slot [27] registered
Sep 4 17:32:27.928945 kernel: acpiphp: Slot [28] registered
Sep 4 17:32:27.928953 kernel: acpiphp: Slot [29] registered
Sep 4 17:32:27.928960 kernel: acpiphp: Slot [30] registered
Sep 4 17:32:27.928968 kernel: acpiphp: Slot [31] registered
Sep 4 17:32:27.928975 kernel: PCI host bridge to bus 0000:00
Sep 4 17:32:27.929109 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 17:32:27.929221 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 17:32:27.929337 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 17:32:27.929468 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Sep 4 17:32:27.929579 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Sep 4 17:32:27.929710 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:32:27.929848 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 4 17:32:27.929990 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 4 17:32:27.930123 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 4 17:32:27.930248 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Sep 4 17:32:27.930368 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 4 17:32:27.930488 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 4 17:32:27.930607 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 4 17:32:27.930743 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 4 17:32:27.930901 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 4 17:32:27.931028 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 4 17:32:27.931148 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Sep 4 17:32:27.931280 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Sep 4 17:32:27.931401 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 4 17:32:27.931522 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Sep 4 17:32:27.931665 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 4 17:32:27.931785 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Sep 4 17:32:27.931915 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 17:32:27.932045 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 17:32:27.932170 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Sep 4 17:32:27.932292 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 4 17:32:27.932413 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 4 17:32:27.932544 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 4 17:32:27.932687 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 4 17:32:27.932814 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 4 17:32:27.932944 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 4 17:32:27.933074 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Sep 4 17:32:27.933194 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Sep 4 17:32:27.933314 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Sep 4 17:32:27.933435 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 4 17:32:27.933561 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 4 17:32:27.933597 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 17:32:27.933606 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 17:32:27.933613 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 17:32:27.933637 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 17:32:27.933645 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 4 17:32:27.933652 kernel: iommu: Default domain type: Translated
Sep 4 17:32:27.933660 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 17:32:27.933667 kernel: efivars: Registered efivars operations
Sep 4 17:32:27.933675 kernel: PCI: Using ACPI for IRQ routing
Sep 4 17:32:27.933685 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 17:32:27.933692 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 4 17:32:27.933700 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 4 17:32:27.933707 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 4 17:32:27.933715 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 4 17:32:27.933840 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 4 17:32:27.933973 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 4 17:32:27.934095 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 17:32:27.934109 kernel: vgaarb: loaded
Sep 4 17:32:27.934117 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 17:32:27.934124 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 17:32:27.934132 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 17:32:27.934139 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:32:27.934147 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:32:27.934154 kernel: pnp: PnP ACPI init
Sep 4 17:32:27.934281 kernel: pnp 00:02: [dma 2]
Sep 4 17:32:27.934292 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 17:32:27.934304 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 17:32:27.934311 kernel: NET: Registered PF_INET protocol family
Sep 4 17:32:27.934319 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:32:27.934327 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:32:27.934334 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:32:27.934342 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:32:27.934350 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:32:27.934357 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:32:27.934365 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:32:27.934375 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:32:27.934383 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:32:27.934390 kernel: NET: Registered PF_XDP protocol family
Sep 4 17:32:27.934512 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 4 17:32:27.934649 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 4 17:32:27.934769 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 17:32:27.934887 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 17:32:27.935010 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 17:32:27.935125 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Sep 4 17:32:27.935234 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Sep 4 17:32:27.935357 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 4 17:32:27.935477 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 4 17:32:27.935488 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:32:27.935496 kernel: Initialise system trusted keyrings
Sep 4 17:32:27.935503 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:32:27.935511 kernel: Key type asymmetric registered
Sep 4 17:32:27.935522 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:32:27.935530 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 4 17:32:27.935537 kernel: io scheduler mq-deadline registered
Sep 4 17:32:27.935545 kernel: io scheduler kyber registered
Sep 4 17:32:27.935552 kernel: io scheduler bfq registered
Sep 4 17:32:27.935560 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 17:32:27.935568 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 4 17:32:27.935576 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 4 17:32:27.935584 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 4 17:32:27.935594 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:32:27.935602 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 17:32:27.935612 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 17:32:27.935657 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 17:32:27.935670 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 17:32:27.935832 kernel: rtc_cmos 00:05: RTC can wake from S4
Sep 4 17:32:27.935845 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 17:32:27.935990 kernel: rtc_cmos 00:05: registered as rtc0
Sep 4 17:32:27.936131 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:32:27 UTC (1725471147)
Sep 4 17:32:27.936266 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 4 17:32:27.936279 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 17:32:27.936290 kernel: efifb: probing for efifb
Sep 4 17:32:27.936301 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 4 17:32:27.936311 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 4 17:32:27.936322 kernel: efifb: scrolling: redraw
Sep 4 17:32:27.936332 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 4 17:32:27.936347 kernel: Console: switching to colour frame buffer device 100x37
Sep 4 17:32:27.936358 kernel: fb0: EFI VGA frame buffer device
Sep 4 17:32:27.936368 kernel: pstore: Using crash dump compression: deflate
Sep 4 17:32:27.936379 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 4 17:32:27.936390 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:32:27.936400 kernel: Segment Routing with IPv6
Sep 4 17:32:27.936411 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:32:27.936422 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:32:27.936432 kernel: Key type dns_resolver registered
Sep 4 17:32:27.936442 kernel: IPI shorthand broadcast: enabled
Sep 4 17:32:27.936456 kernel: sched_clock: Marking stable (703004373, 111915860)->(827621362, -12701129)
Sep 4 17:32:27.936469 kernel: registered taskstats version 1
Sep 4 17:32:27.936480 kernel: Loading compiled-in X.509 certificates
Sep 4 17:32:27.936490 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02'
Sep 4 17:32:27.936501 kernel: Key type .fscrypt registered
Sep 4 17:32:27.936515 kernel: Key type fscrypt-provisioning registered
Sep 4 17:32:27.936528 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:32:27.936542 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:32:27.936555 kernel: ima: No architecture policies found
Sep 4 17:32:27.936568 kernel: clk: Disabling unused clocks
Sep 4 17:32:27.936584 kernel: Freeing unused kernel image (initmem) memory: 49336K
Sep 4 17:32:27.936598 kernel: Write protecting the kernel read-only data: 36864k
Sep 4 17:32:27.936611 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Sep 4 17:32:27.936773 kernel: Run /init as init process
Sep 4 17:32:27.936788 kernel: with arguments:
Sep 4 17:32:27.936799 kernel: /init
Sep 4 17:32:27.936809 kernel: with environment:
Sep 4 17:32:27.936819 kernel: HOME=/
Sep 4 17:32:27.936830 kernel: TERM=linux
Sep 4 17:32:27.936841 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:32:27.936853 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:32:27.936867 systemd[1]: Detected virtualization kvm.
Sep 4 17:32:27.936881 systemd[1]: Detected architecture x86-64.
Sep 4 17:32:27.936903 systemd[1]: Running in initrd.
Sep 4 17:32:27.936912 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:32:27.936920 systemd[1]: Hostname set to .
Sep 4 17:32:27.936929 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:32:27.936937 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:32:27.936945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:32:27.936957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:32:27.936966 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:32:27.936975 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:32:27.936983 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:32:27.936992 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:32:27.937002 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:32:27.937011 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:32:27.937022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:32:27.937030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:32:27.937038 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:32:27.937046 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:32:27.937055 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:32:27.937063 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:32:27.937072 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:32:27.937080 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:32:27.937089 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:32:27.937100 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:32:27.937108 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:32:27.937117 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:32:27.937125 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:32:27.937133 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:32:27.937142 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:32:27.937150 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:32:27.937159 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:32:27.937170 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:32:27.937178 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:32:27.937186 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:32:27.937195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:32:27.937203 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:32:27.937212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:32:27.937220 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:32:27.937231 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:32:27.937259 systemd-journald[192]: Collecting audit messages is disabled.
Sep 4 17:32:27.937281 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:32:27.937290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:27.937299 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:32:27.937308 systemd-journald[192]: Journal started
Sep 4 17:32:27.937325 systemd-journald[192]: Runtime Journal (/run/log/journal/e4259123870449929d9129e44f609d73) is 6.0M, max 48.3M, 42.3M free.
Sep 4 17:32:27.941643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:32:27.943632 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:32:27.946474 systemd-modules-load[194]: Inserted module 'overlay'
Sep 4 17:32:27.948799 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:32:27.955286 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:32:27.960164 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:32:27.963755 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:32:27.964372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:32:27.980910 dracut-cmdline[220]: dracut-dracut-053
Sep 4 17:32:27.983901 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:32:27.995671 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:32:27.998448 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 4 17:32:27.999478 kernel: Bridge firewalling registered
Sep 4 17:32:28.001344 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:32:28.006898 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:32:28.017880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:32:28.029921 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:32:28.071069 systemd-resolved[267]: Positive Trust Anchors:
Sep 4 17:32:28.071088 systemd-resolved[267]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:32:28.071133 systemd-resolved[267]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:32:28.074395 systemd-resolved[267]: Defaulting to hostname 'linux'.
Sep 4 17:32:28.075727 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:32:28.080818 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:32:28.089641 kernel: SCSI subsystem initialized
Sep 4 17:32:28.100650 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:32:28.112648 kernel: iscsi: registered transport (tcp)
Sep 4 17:32:28.143962 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:32:28.143990 kernel: QLogic iSCSI HBA Driver
Sep 4 17:32:28.198203 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:32:28.216857 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:32:28.247433 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:32:28.247516 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:32:28.247529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:32:28.292664 kernel: raid6: avx2x4 gen() 29299 MB/s
Sep 4 17:32:28.309659 kernel: raid6: avx2x2 gen() 30972 MB/s
Sep 4 17:32:28.326742 kernel: raid6: avx2x1 gen() 25987 MB/s
Sep 4 17:32:28.326782 kernel: raid6: using algorithm avx2x2 gen() 30972 MB/s
Sep 4 17:32:28.344811 kernel: raid6: .... xor() 18232 MB/s, rmw enabled
Sep 4 17:32:28.344857 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 17:32:28.369649 kernel: xor: automatically using best checksumming function avx
Sep 4 17:32:28.543652 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:32:28.554886 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:32:28.562795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:32:28.575257 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Sep 4 17:32:28.579908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:32:28.589755 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:32:28.603420 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Sep 4 17:32:28.631612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:32:28.645731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:32:28.711274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:32:28.718816 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:32:28.733512 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:32:28.736326 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:32:28.739162 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:32:28.742478 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:32:28.748791 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:32:28.762208 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:32:28.768260 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 17:32:28.768477 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:32:28.772650 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:32:28.780973 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:32:28.781018 kernel: GPT:9289727 != 19775487
Sep 4 17:32:28.781034 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:32:28.781047 kernel: GPT:9289727 != 19775487
Sep 4 17:32:28.781059 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:32:28.781072 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:32:28.780690 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:32:28.780824 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:32:28.787694 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:32:28.790508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:32:28.791724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:28.794112 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:32:28.806349 kernel: libata version 3.00 loaded.
Sep 4 17:32:28.806414 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:32:28.806436 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:32:28.806643 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 4 17:32:28.808669 kernel: scsi host0: ata_piix
Sep 4 17:32:28.808906 kernel: scsi host1: ata_piix
Sep 4 17:32:28.809800 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Sep 4 17:32:28.810066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:32:28.813077 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Sep 4 17:32:28.824646 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (456)
Sep 4 17:32:28.828638 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Sep 4 17:32:28.837691 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:32:28.840553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:28.848108 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:32:28.855923 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:32:28.859037 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:32:28.869824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:32:28.896853 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:32:28.899634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:32:28.901118 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:28.903810 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:32:28.913391 disk-uuid[540]: Primary Header is updated.
Sep 4 17:32:28.913391 disk-uuid[540]: Secondary Entries is updated.
Sep 4 17:32:28.913391 disk-uuid[540]: Secondary Header is updated.
Sep 4 17:32:28.917251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:32:28.916730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:32:28.922642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:32:28.931886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:28.937772 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:32:28.962835 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:32:28.970665 kernel: ata2: found unknown device (class 0)
Sep 4 17:32:28.971656 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 17:32:28.974712 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 17:32:29.032657 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 17:32:29.032991 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 17:32:29.049660 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 4 17:32:29.928504 disk-uuid[542]: The operation has completed successfully.
Sep 4 17:32:29.930332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:32:29.957463 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:32:29.957600 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:32:29.984779 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:32:29.989962 sh[584]: Success
Sep 4 17:32:30.004651 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 4 17:32:30.036287 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:32:30.049995 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:32:30.052816 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:32:30.063687 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep 4 17:32:30.063719 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:32:30.063734 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:32:30.066111 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:32:30.066134 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:32:30.069508 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:32:30.070507 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:32:30.081750 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:32:30.083298 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:32:30.093099 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:32:30.093130 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:32:30.093141 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:32:30.096642 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:32:30.105437 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:32:30.107149 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:32:30.116200 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:32:30.125891 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:32:30.175472 ignition[687]: Ignition 2.18.0
Sep 4 17:32:30.176047 ignition[687]: Stage: fetch-offline
Sep 4 17:32:30.176092 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:30.176102 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:30.176293 ignition[687]: parsed url from cmdline: ""
Sep 4 17:32:30.176297 ignition[687]: no config URL provided
Sep 4 17:32:30.176302 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:32:30.176317 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:32:30.176344 ignition[687]: op(1): [started] loading QEMU firmware config module
Sep 4 17:32:30.176350 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:32:30.182995 ignition[687]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:32:30.192706 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:32:30.208750 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:32:30.226392 ignition[687]: parsing config with SHA512: 0304c704b5e36ff6dd4cd7b7d8442a9749c89e11d725295df13dcda1dca656ca40ab2e778046abdb25eba5696d714da24f25c8ff90ab27db29fb723e9dcc75fe
Sep 4 17:32:30.231614 systemd-networkd[775]: lo: Link UP
Sep 4 17:32:30.231636 systemd-networkd[775]: lo: Gained carrier
Sep 4 17:32:30.233154 systemd-networkd[775]: Enumeration completed
Sep 4 17:32:30.233486 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:32:30.233545 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:32:30.233549 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:32:30.234146 systemd[1]: Reached target network.target - Network.
Sep 4 17:32:30.234847 systemd-networkd[775]: eth0: Link UP
Sep 4 17:32:30.234850 systemd-networkd[775]: eth0: Gained carrier
Sep 4 17:32:30.234857 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:32:30.243596 unknown[687]: fetched base config from "system"
Sep 4 17:32:30.245643 unknown[687]: fetched user config from "qemu"
Sep 4 17:32:30.247571 ignition[687]: fetch-offline: fetch-offline passed
Sep 4 17:32:30.248443 ignition[687]: Ignition finished successfully
Sep 4 17:32:30.249352 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.157/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:32:30.252787 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:32:30.255472 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:32:30.261950 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:32:30.275135 ignition[779]: Ignition 2.18.0
Sep 4 17:32:30.275145 ignition[779]: Stage: kargs
Sep 4 17:32:30.275287 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:30.275298 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:30.276111 ignition[779]: kargs: kargs passed
Sep 4 17:32:30.276155 ignition[779]: Ignition finished successfully
Sep 4 17:32:30.280814 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:32:30.292849 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:32:30.304007 ignition[787]: Ignition 2.18.0
Sep 4 17:32:30.304017 ignition[787]: Stage: disks
Sep 4 17:32:30.304170 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:30.304181 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:30.304989 ignition[787]: disks: disks passed
Sep 4 17:32:30.305033 ignition[787]: Ignition finished successfully
Sep 4 17:32:30.310568 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:32:30.311197 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:32:30.312880 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:32:30.314773 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:32:30.315110 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:32:30.315439 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:32:30.330745 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:32:30.346186 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:32:30.352941 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:32:30.371790 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:32:30.470650 kernel: EXT4-fs (vda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep 4 17:32:30.471196 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:32:30.473504 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:32:30.485695 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:32:30.488162 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:32:30.490572 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:32:30.490611 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:32:30.500412 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Sep 4 17:32:30.500436 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:32:30.500447 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:32:30.500458 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:32:30.490643 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:32:30.502662 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:32:30.502663 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:32:30.505477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:32:30.526739 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:32:30.558798 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:32:30.563847 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:32:30.567184 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:32:30.570436 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:32:30.646124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:32:30.658725 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:32:30.662118 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:32:30.666670 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:32:30.684960 ignition[920]: INFO : Ignition 2.18.0
Sep 4 17:32:30.684960 ignition[920]: INFO : Stage: mount
Sep 4 17:32:30.686758 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:30.686758 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:30.686758 ignition[920]: INFO : mount: mount passed
Sep 4 17:32:30.686758 ignition[920]: INFO : Ignition finished successfully
Sep 4 17:32:30.688736 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:32:30.700695 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:32:30.701895 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:32:31.063140 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:32:31.076861 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:32:31.084173 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (935)
Sep 4 17:32:31.084201 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:32:31.085083 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:32:31.085096 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:32:31.088643 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:32:31.089342 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:32:31.120821 ignition[952]: INFO : Ignition 2.18.0
Sep 4 17:32:31.120821 ignition[952]: INFO : Stage: files
Sep 4 17:32:31.122586 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:31.122586 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:31.122586 ignition[952]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:32:31.126143 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:32:31.126143 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:32:31.126143 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:32:31.126143 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:32:31.126143 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:32:31.125328 unknown[952]: wrote ssh authorized keys file for user: core
Sep 4 17:32:31.134095 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:32:31.134095 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:32:31.184188 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:32:31.257122 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:32:31.259294 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:32:31.261180 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:32:31.263117 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:32:31.265132 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:32:31.266985 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:32:31.268986 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:32:31.270919 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:32:31.272915 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:32:31.275097 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:32:31.277116 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:32:31.279127 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:32:31.281934 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:32:31.284721 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:32:31.287188 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Sep 4 17:32:31.639461 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:32:31.753058 systemd-networkd[775]: eth0: Gained IPv6LL
Sep 4 17:32:32.052515 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Sep 4 17:32:32.052515 ignition[952]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:32:32.056213 ignition[952]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:32:32.058292 ignition[952]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:32:32.058292 ignition[952]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:32:32.058292 ignition[952]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 17:32:32.058292 ignition[952]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:32:32.058292 ignition[952]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:32:32.058292 ignition[952]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 17:32:32.058292 ignition[952]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:32:32.084452 ignition[952]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:32:32.089454 ignition[952]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:32:32.091411 ignition[952]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:32:32.091411 ignition[952]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:32:32.091411 ignition[952]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:32:32.096260 ignition[952]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:32:32.096260 ignition[952]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:32:32.096260 ignition[952]: INFO : files: files passed
Sep 4 17:32:32.096260 ignition[952]: INFO : Ignition finished successfully
Sep 4 17:32:32.104762 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:32:32.111921 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:32:32.114942 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:32:32.117843 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:32:32.118873 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:32:32.126024 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 17:32:32.130549 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:32:32.130549 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:32:32.133943 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:32:32.137281 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:32:32.138254 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:32:32.147802 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:32:32.171647 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:32:32.172820 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:32:32.175666 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:32:32.177876 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:32:32.180169 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:32:32.194832 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:32:32.207348 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:32:32.211575 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:32:32.225827 systemd[1]: Stopped target network.target - Network.
Sep 4 17:32:32.227793 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:32:32.230152 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:32:32.232543 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:32:32.234498 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:32:32.235585 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:32:32.238407 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:32:32.240539 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:32:32.242462 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:32:32.244811 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:32:32.247202 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:32:32.249537 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:32:32.251770 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:32:32.254531 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:32:32.256724 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:32:32.258905 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:32:32.260525 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:32:32.261615 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:32:32.263964 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:32:32.266178 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:32:32.268513 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:32:32.269531 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:32:32.272162 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:32:32.273217 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:32:32.275453 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:32:32.276547 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:32:32.279003 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:32:32.280819 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:32:32.285665 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:32:32.288390 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:32:32.290242 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:32:32.292105 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:32:32.293005 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:32:32.294962 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:32:32.295870 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:32:32.297921 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:32:32.299109 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:32:32.301653 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:32:32.302650 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:32:32.318819 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:32:32.321602 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:32:32.323668 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:32:32.325966 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:32:32.328177 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:32:32.329524 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:32:32.330139 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:32:32.330268 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:32:32.334856 ignition[1008]: INFO : Ignition 2.18.0
Sep 4 17:32:32.334856 ignition[1008]: INFO : Stage: umount
Sep 4 17:32:32.334856 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:32.334856 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:32.334856 ignition[1008]: INFO : umount: umount passed
Sep 4 17:32:32.334856 ignition[1008]: INFO : Ignition finished successfully
Sep 4 17:32:32.334967 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:32:32.335096 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:32:32.337119 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:32:32.337231 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:32:32.338976 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:32:32.339044 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:32:32.339690 systemd-networkd[775]: eth0: DHCPv6 lease lost
Sep 4 17:32:32.340362 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:32:32.340419 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:32:32.343781 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:32:32.343830 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:32:32.344316 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:32:32.344356 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:32:32.347243 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:32:32.347360 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:32:32.350562 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:32:32.350724 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:32:32.354248 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:32:32.354301 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:32:32.360901 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:32:32.361169 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:32:32.361225 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:32:32.361558 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:32:32.361607 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:32:32.366176 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:32:32.366230 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:32:32.368098 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:32:32.368152 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:32:32.368535 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:32:32.380305 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:32:32.380447 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:32:32.390359 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:32:32.390544 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:32:32.392872 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:32:32.392924 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:32:32.393167 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:32:32.393203 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:32:32.393506 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:32:32.393554 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:32:32.399926 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:32:32.399974 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:32:32.402723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:32:32.402785 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:32:32.411792 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:32:32.412199 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:32:32.412253 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:32:32.414386 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:32:32.414441 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:32:32.414901 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:32:32.414946 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:32:32.415241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:32:32.415284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:32.419294 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:32:32.419410 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:32:32.443899 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:32:32.642903 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:32:32.643037 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:32:32.645522 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:32:32.647782 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:32:32.647842 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:32:32.663931 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:32:32.674255 systemd[1]: Switching root.
Sep 4 17:32:32.708019 systemd-journald[192]: Journal stopped
Sep 4 17:32:34.057981 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:32:34.058044 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:32:34.058058 kernel: SELinux: policy capability open_perms=1
Sep 4 17:32:34.058073 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:32:34.058084 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:32:34.058096 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:32:34.058107 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:32:34.058120 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:32:34.058136 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:32:34.058151 kernel: audit: type=1403 audit(1725471153.158:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:32:34.058171 systemd[1]: Successfully loaded SELinux policy in 52.189ms.
Sep 4 17:32:34.058194 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.585ms.
Sep 4 17:32:34.058214 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:32:34.058226 systemd[1]: Detected virtualization kvm.
Sep 4 17:32:34.058239 systemd[1]: Detected architecture x86-64.
Sep 4 17:32:34.058250 systemd[1]: Detected first boot.
Sep 4 17:32:34.058262 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:32:34.058274 zram_generator::config[1053]: No configuration found.
Sep 4 17:32:34.058287 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:32:34.058299 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:32:34.058313 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:32:34.058325 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:32:34.058348 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:32:34.058362 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:32:34.058374 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:32:34.058386 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:32:34.058405 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:32:34.058424 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:32:34.058442 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:32:34.058461 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:32:34.058478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:32:34.058492 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:32:34.058505 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:32:34.058516 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:32:34.058529 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:32:34.058541 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:32:34.058553 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:32:34.058565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:32:34.058580 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:32:34.058591 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:32:34.058603 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:32:34.058616 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:32:34.058640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:32:34.058652 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:32:34.058664 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:32:34.058677 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:32:34.058692 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:32:34.058704 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:32:34.058716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:32:34.058740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:32:34.058752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:32:34.058765 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:32:34.058777 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:32:34.058790 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:32:34.058802 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:32:34.058817 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:32:34.058833 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:32:34.058849 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:32:34.058865 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:32:34.058878 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:32:34.058891 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:32:34.058903 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:32:34.058915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:32:34.058930 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:32:34.058942 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:32:34.058954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:32:34.060112 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:32:34.060128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:32:34.060140 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:32:34.060152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:32:34.060170 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:32:34.060187 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:32:34.060204 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:32:34.060220 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:32:34.060236 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:32:34.060249 kernel: fuse: init (API version 7.39)
Sep 4 17:32:34.060261 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:32:34.060272 kernel: loop: module loaded
Sep 4 17:32:34.060284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:32:34.060314 systemd-journald[1120]: Collecting audit messages is disabled.
Sep 4 17:32:34.060339 kernel: ACPI: bus type drm_connector registered
Sep 4 17:32:34.060351 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:32:34.060364 systemd-journald[1120]: Journal started
Sep 4 17:32:34.060394 systemd-journald[1120]: Runtime Journal (/run/log/journal/e4259123870449929d9129e44f609d73) is 6.0M, max 48.3M, 42.3M free.
Sep 4 17:32:33.788250 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:32:33.818466 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 17:32:33.819053 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:32:34.063662 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:32:34.076964 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:32:34.079654 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:32:34.079712 systemd[1]: Stopped verity-setup.service.
Sep 4 17:32:34.082654 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:32:34.086680 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:32:34.088079 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:32:34.089282 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:32:34.090563 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:32:34.091693 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:32:34.098956 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:32:34.100199 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:32:34.101550 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:32:34.103229 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:32:34.103421 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:32:34.104953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:32:34.105116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:32:34.106648 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:32:34.106833 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:32:34.108222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:32:34.108389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:32:34.110169 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:32:34.110364 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:32:34.112094 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:32:34.112294 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:32:34.113797 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:32:34.115352 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:32:34.116923 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:32:34.132303 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:32:34.147786 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:32:34.151764 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:32:34.153772 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:32:34.153809 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:32:34.155938 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:32:34.159928 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:32:34.170087 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:32:34.171283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:32:34.174135 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:32:34.177813 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:32:34.185159 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:32:34.187139 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:32:34.188465 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:32:34.194742 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:32:34.199894 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:32:34.207186 systemd-journald[1120]: Time spent on flushing to /var/log/journal/e4259123870449929d9129e44f609d73 is 18.217ms for 991 entries.
Sep 4 17:32:34.207186 systemd-journald[1120]: System Journal (/var/log/journal/e4259123870449929d9129e44f609d73) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:32:34.382710 systemd-journald[1120]: Received client request to flush runtime journal.
Sep 4 17:32:34.382814 kernel: loop0: detected capacity change from 0 to 139904
Sep 4 17:32:34.382859 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:32:34.383004 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:32:34.383034 kernel: loop1: detected capacity change from 0 to 210664
Sep 4 17:32:34.383059 kernel: loop2: detected capacity change from 0 to 80568
Sep 4 17:32:34.211370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:32:34.214671 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:32:34.216177 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:32:34.217609 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:32:34.224223 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:32:34.235828 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:32:34.245060 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:32:34.255477 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:32:34.267674 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Sep 4 17:32:34.267688 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Sep 4 17:32:34.270515 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:32:34.274810 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:32:34.288842 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:32:34.290355 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:32:34.292616 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:32:34.303910 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:32:34.338696 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:32:34.349931 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:32:34.368612 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Sep 4 17:32:34.368647 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Sep 4 17:32:34.374402 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:32:34.384566 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:32:34.407777 kernel: loop3: detected capacity change from 0 to 139904
Sep 4 17:32:34.419667 kernel: loop4: detected capacity change from 0 to 210664
Sep 4 17:32:34.449672 kernel: loop5: detected capacity change from 0 to 80568
Sep 4 17:32:34.456878 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 17:32:34.457851 (sd-merge)[1193]: Merged extensions into '/usr'.
Sep 4 17:32:34.463232 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:32:34.463251 systemd[1]: Reloading...
Sep 4 17:32:34.531832 zram_generator::config[1216]: No configuration found.
Sep 4 17:32:34.603953 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:32:34.673043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:32:34.728501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:32:34.729032 systemd[1]: Reloading finished in 265 ms.
Sep 4 17:32:34.761342 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:32:34.762992 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:32:34.764698 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:32:34.778848 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:32:34.781564 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:32:34.794225 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:32:34.794247 systemd[1]: Reloading...
Sep 4 17:32:34.808012 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:32:34.808449 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:32:34.809533 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:32:34.810339 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Sep 4 17:32:34.810496 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Sep 4 17:32:34.814397 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:32:34.814523 systemd-tmpfiles[1259]: Skipping /boot
Sep 4 17:32:34.825090 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:32:34.825105 systemd-tmpfiles[1259]: Skipping /boot
Sep 4 17:32:34.856639 zram_generator::config[1284]: No configuration found.
Sep 4 17:32:34.980032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:32:35.034430 systemd[1]: Reloading finished in 239 ms.
Sep 4 17:32:35.056733 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:32:35.064209 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:32:35.073504 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:32:35.076532 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:32:35.079826 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:32:35.084829 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:32:35.091770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:32:35.096948 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:32:35.101330 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:32:35.101549 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:32:35.103345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:32:35.107445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:32:35.112729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:32:35.115381 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:32:35.118525 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:32:35.119849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:32:35.121346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:32:35.121572 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:32:35.123674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:32:35.123884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:32:35.128170 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:32:35.128376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:32:35.133502 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Sep 4 17:32:35.138575 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:32:35.144844 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:32:35.153676 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 17:32:35.154005 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:32:35.154941 augenrules[1352]: No rules Sep 4 17:32:35.162376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:32:35.165728 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:32:35.168676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:32:35.175083 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:32:35.177502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:32:35.180539 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:32:35.182318 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 4 17:32:35.183578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:32:35.187079 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:32:35.190003 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:32:35.192252 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:32:35.195178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:32:35.195513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:32:35.198212 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:32:35.199305 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:32:35.211148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:32:35.211388 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:32:35.214325 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:32:35.214519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:32:35.216668 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:32:35.226686 systemd[1]: Finished ensure-sysext.service. Sep 4 17:32:35.247645 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1378) Sep 4 17:32:35.246249 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 17:32:35.264209 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1380) Sep 4 17:32:35.259851 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:32:35.261471 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 4 17:32:35.261563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:32:35.274957 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:32:35.276614 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:32:35.308759 systemd-resolved[1327]: Positive Trust Anchors: Sep 4 17:32:35.308779 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:32:35.308812 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:32:35.313908 systemd-resolved[1327]: Defaulting to hostname 'linux'. Sep 4 17:32:35.315513 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:32:35.317091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:32:35.329005 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:32:35.337658 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Sep 4 17:32:35.342825 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 4 17:32:35.357848 systemd-networkd[1397]: lo: Link UP Sep 4 17:32:35.358015 systemd-networkd[1397]: lo: Gained carrier Sep 4 17:32:35.359729 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 4 17:32:35.359765 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 17:32:35.360752 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:32:35.364776 systemd-networkd[1397]: Enumeration completed Sep 4 17:32:35.364873 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:32:35.365397 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:32:35.365401 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:32:35.366197 systemd[1]: Reached target network.target - Network. Sep 4 17:32:35.367244 systemd-networkd[1397]: eth0: Link UP Sep 4 17:32:35.367295 systemd-networkd[1397]: eth0: Gained carrier Sep 4 17:32:35.367360 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:32:35.373683 kernel: ACPI: button: Power Button [PWRF] Sep 4 17:32:35.373773 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:32:35.378424 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:32:35.380847 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:32:35.383725 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.157/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:32:35.385976 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Sep 4 17:32:36.717685 systemd-resolved[1327]: Clock change detected. Flushing caches. 
Sep 4 17:32:36.719031 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:32:36.719148 systemd-timesyncd[1401]: Initial clock synchronization to Wed 2024-09-04 17:32:36.717637 UTC. Sep 4 17:32:36.743901 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:32:36.744115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:32:36.747206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:32:36.747430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:32:36.814864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:32:36.831194 kernel: kvm_amd: TSC scaling supported Sep 4 17:32:36.831262 kernel: kvm_amd: Nested Virtualization enabled Sep 4 17:32:36.831283 kernel: kvm_amd: Nested Paging enabled Sep 4 17:32:36.831296 kernel: kvm_amd: LBR virtualization supported Sep 4 17:32:36.832284 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 17:32:36.832311 kernel: kvm_amd: Virtual GIF supported Sep 4 17:32:36.852909 kernel: EDAC MC: Ver: 3.0.0 Sep 4 17:32:36.881929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:32:36.896972 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:32:36.909959 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:32:36.919031 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:32:36.950085 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:32:36.951856 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:32:36.953234 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:32:36.954659 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Sep 4 17:32:36.956026 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:32:36.957659 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:32:36.959039 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:32:36.960476 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:32:36.961840 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:32:36.961880 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:32:36.962912 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:32:36.964729 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:32:36.967895 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:32:36.990523 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:32:36.992978 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:32:36.994624 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:32:36.995770 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:32:36.996728 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:32:36.997682 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:32:36.997711 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:32:36.998658 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:32:37.000662 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:32:37.002913 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 4 17:32:37.006974 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:32:37.007406 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:32:37.010020 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:32:37.013537 jq[1432]: false Sep 4 17:32:37.013932 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:32:37.016934 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:32:37.020680 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:32:37.024017 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:32:37.031963 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:32:37.033496 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:32:37.033968 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:32:37.034605 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:32:37.035326 dbus-daemon[1431]: [system] SELinux support is enabled Sep 4 17:32:37.037047 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 4 17:32:37.038524 extend-filesystems[1433]: Found loop3 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found loop4 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found loop5 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found sr0 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found vda Sep 4 17:32:37.039487 extend-filesystems[1433]: Found vda1 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found vda2 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found vda3 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found usr Sep 4 17:32:37.039487 extend-filesystems[1433]: Found vda4 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found vda6 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found vda7 Sep 4 17:32:37.039487 extend-filesystems[1433]: Found vda9 Sep 4 17:32:37.039487 extend-filesystems[1433]: Checking size of /dev/vda9 Sep 4 17:32:37.066700 extend-filesystems[1433]: Resized partition /dev/vda9 Sep 4 17:32:37.070781 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:32:37.070805 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1383) Sep 4 17:32:37.043772 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:32:37.070970 extend-filesystems[1454]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 17:32:37.051424 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:32:37.073637 update_engine[1445]: I0904 17:32:37.069595 1445 main.cc:92] Flatcar Update Engine starting Sep 4 17:32:37.055803 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:32:37.073966 jq[1446]: true Sep 4 17:32:37.074196 update_engine[1445]: I0904 17:32:37.073798 1445 update_check_scheduler.cc:74] Next update check in 6m51s Sep 4 17:32:37.056033 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:32:37.056346 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 4 17:32:37.056565 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:32:37.070983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:32:37.072457 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:32:37.094344 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:32:37.104908 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:32:37.110327 tar[1455]: linux-amd64/helm Sep 4 17:32:37.123614 jq[1457]: true Sep 4 17:32:37.123802 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:32:37.124432 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 17:32:37.124688 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:32:37.125796 systemd-logind[1444]: New seat seat0. Sep 4 17:32:37.128297 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:32:37.134580 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:32:37.134730 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:32:37.136124 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:32:37.136232 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 4 17:32:37.137961 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:32:37.137961 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:32:37.137961 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:32:37.143657 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Sep 4 17:32:37.151203 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:32:37.153883 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:32:37.156629 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:32:37.156841 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:32:37.160868 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:32:37.165569 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:32:37.184179 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:32:37.318043 containerd[1460]: time="2024-09-04T17:32:37.317936224Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:32:37.342243 containerd[1460]: time="2024-09-04T17:32:37.342176789Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:32:37.342308 containerd[1460]: time="2024-09-04T17:32:37.342268411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:32:37.344192 containerd[1460]: time="2024-09-04T17:32:37.344151425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:32:37.344192 containerd[1460]: time="2024-09-04T17:32:37.344188855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:32:37.344477 containerd[1460]: time="2024-09-04T17:32:37.344450295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:32:37.344508 containerd[1460]: time="2024-09-04T17:32:37.344475823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:32:37.344602 containerd[1460]: time="2024-09-04T17:32:37.344580259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:32:37.344674 containerd[1460]: time="2024-09-04T17:32:37.344652856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:32:37.344696 containerd[1460]: time="2024-09-04T17:32:37.344675147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:32:37.344854 containerd[1460]: time="2024-09-04T17:32:37.344779944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:32:37.345094 containerd[1460]: time="2024-09-04T17:32:37.345066241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:32:37.345120 containerd[1460]: time="2024-09-04T17:32:37.345096308Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:32:37.345120 containerd[1460]: time="2024-09-04T17:32:37.345109973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:32:37.345262 containerd[1460]: time="2024-09-04T17:32:37.345236841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:32:37.345262 containerd[1460]: time="2024-09-04T17:32:37.345256799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:32:37.345850 containerd[1460]: time="2024-09-04T17:32:37.345323805Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:32:37.345850 containerd[1460]: time="2024-09-04T17:32:37.345340065Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:32:37.350811 containerd[1460]: time="2024-09-04T17:32:37.350675527Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:32:37.350811 containerd[1460]: time="2024-09-04T17:32:37.350705333Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:32:37.350811 containerd[1460]: time="2024-09-04T17:32:37.350717846Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:32:37.350811 containerd[1460]: time="2024-09-04T17:32:37.350747742Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Sep 4 17:32:37.350811 containerd[1460]: time="2024-09-04T17:32:37.350762971Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:32:37.350811 containerd[1460]: time="2024-09-04T17:32:37.350774442Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:32:37.350811 containerd[1460]: time="2024-09-04T17:32:37.350787457Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:32:37.351039 containerd[1460]: time="2024-09-04T17:32:37.350937679Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:32:37.351039 containerd[1460]: time="2024-09-04T17:32:37.350954791Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:32:37.351039 containerd[1460]: time="2024-09-04T17:32:37.350967765Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:32:37.351039 containerd[1460]: time="2024-09-04T17:32:37.350980990Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:32:37.351039 containerd[1460]: time="2024-09-04T17:32:37.350994595Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:32:37.351039 containerd[1460]: time="2024-09-04T17:32:37.351011467Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:32:37.351039 containerd[1460]: time="2024-09-04T17:32:37.351024111Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:32:37.351039 containerd[1460]: time="2024-09-04T17:32:37.351036254Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 4 17:32:37.351233 containerd[1460]: time="2024-09-04T17:32:37.351127284Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:32:37.351233 containerd[1460]: time="2024-09-04T17:32:37.351142102Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:32:37.351233 containerd[1460]: time="2024-09-04T17:32:37.351155016Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:32:37.351233 containerd[1460]: time="2024-09-04T17:32:37.351166368Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:32:37.351327 containerd[1460]: time="2024-09-04T17:32:37.351280281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:32:37.351640 containerd[1460]: time="2024-09-04T17:32:37.351596495Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:32:37.351640 containerd[1460]: time="2024-09-04T17:32:37.351639626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.351715 containerd[1460]: time="2024-09-04T17:32:37.351654944Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:32:37.351715 containerd[1460]: time="2024-09-04T17:32:37.351676114Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:32:37.351766 containerd[1460]: time="2024-09-04T17:32:37.351736608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 4 17:32:37.351766 containerd[1460]: time="2024-09-04T17:32:37.351749913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.351766 containerd[1460]: time="2024-09-04T17:32:37.351763699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.351936 containerd[1460]: time="2024-09-04T17:32:37.351775591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.351936 containerd[1460]: time="2024-09-04T17:32:37.351852936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.351936 containerd[1460]: time="2024-09-04T17:32:37.351866261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.351936 containerd[1460]: time="2024-09-04T17:32:37.351886940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.351936 containerd[1460]: time="2024-09-04T17:32:37.351898291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.351936 containerd[1460]: time="2024-09-04T17:32:37.351910514Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:32:37.352086 containerd[1460]: time="2024-09-04T17:32:37.352057660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.352086 containerd[1460]: time="2024-09-04T17:32:37.352076085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.352136 containerd[1460]: time="2024-09-04T17:32:37.352088007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Sep 4 17:32:37.352136 containerd[1460]: time="2024-09-04T17:32:37.352103075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.352136 containerd[1460]: time="2024-09-04T17:32:37.352115699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.352136 containerd[1460]: time="2024-09-04T17:32:37.352129425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.352234 containerd[1460]: time="2024-09-04T17:32:37.352141918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.352234 containerd[1460]: time="2024-09-04T17:32:37.352154482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:32:37.352447 containerd[1460]: time="2024-09-04T17:32:37.352392328Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] 
NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:32:37.352447 containerd[1460]: time="2024-09-04T17:32:37.352444817Z" level=info msg="Connect containerd service"
Sep 4 17:32:37.352638 containerd[1460]: time="2024-09-04T17:32:37.352464744Z" level=info msg="using legacy CRI server"
Sep 4 17:32:37.352638 containerd[1460]: time="2024-09-04T17:32:37.352472178Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:32:37.352638 containerd[1460]: time="2024-09-04T17:32:37.352553340Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:32:37.353279 containerd[1460]: time="2024-09-04T17:32:37.353218418Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:32:37.353279 containerd[1460]: time="2024-09-04T17:32:37.353267681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:32:37.353347 containerd[1460]: time="2024-09-04T17:32:37.353285554Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:32:37.353375 containerd[1460]: time="2024-09-04T17:32:37.353296916Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:32:37.353413 containerd[1460]: time="2024-09-04T17:32:37.353373018Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:32:37.353457 containerd[1460]: time="2024-09-04T17:32:37.353337371Z" level=info msg="Start subscribing containerd event"
Sep 4 17:32:37.353484 containerd[1460]: time="2024-09-04T17:32:37.353471734Z" level=info msg="Start recovering state"
Sep 4 17:32:37.353634 containerd[1460]: time="2024-09-04T17:32:37.353608310Z" level=info msg="Start event monitor"
Sep 4 17:32:37.353634 containerd[1460]: time="2024-09-04T17:32:37.353624761Z" level=info msg="Start snapshots syncer"
Sep 4 17:32:37.353990 containerd[1460]: time="2024-09-04T17:32:37.353954008Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:32:37.354024 containerd[1460]: time="2024-09-04T17:32:37.353990607Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:32:37.354024 containerd[1460]: time="2024-09-04T17:32:37.354000155Z" level=info msg="Start streaming server"
Sep 4 17:32:37.354223 containerd[1460]: time="2024-09-04T17:32:37.354172118Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:32:37.357310 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:32:37.358657 containerd[1460]: time="2024-09-04T17:32:37.358087084Z" level=info msg="containerd successfully booted in 0.042116s"
Sep 4 17:32:37.364382 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:32:37.391249 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:32:37.404291 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:32:37.410670 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:32:37.410987 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:32:37.414986 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:32:37.432094 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:32:37.440246 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:32:37.442530 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:32:37.444053 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:32:37.537745 tar[1455]: linux-amd64/LICENSE
Sep 4 17:32:37.537889 tar[1455]: linux-amd64/README.md
Sep 4 17:32:37.565324 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:32:38.330043 systemd-networkd[1397]: eth0: Gained IPv6LL
Sep 4 17:32:38.333185 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:32:38.335013 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:32:38.347040 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 17:32:38.349307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:32:38.351417 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:32:38.372858 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 17:32:38.373149 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 17:32:38.375238 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:32:38.377686 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:32:39.029983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:32:39.031876 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:32:39.033234 systemd[1]: Startup finished in 837ms (kernel) + 5.436s (initrd) + 4.595s (userspace) = 10.869s.
Sep 4 17:32:39.038214 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:32:39.488522 kubelet[1544]: E0904 17:32:39.488396 1544 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:32:39.492516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:32:39.492706 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:32:39.493078 systemd[1]: kubelet.service: Consumed 1.008s CPU time.
Sep 4 17:32:46.874145 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
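[Editor's note: the kubelet exit above is expected on a node that has not yet joined a cluster — kubelet.service starts before `kubeadm init`/`kubeadm join` has generated /var/lib/kubelet/config.yaml, fails on the missing file, and is retried by systemd until the file appears. As a rough illustration only (a hypothetical sketch, not the configuration this node eventually used), a minimal KubeletConfiguration of the kind kubeadm writes there looks like:]

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml, normally generated by
# `kubeadm init` / `kubeadm join`; shown only to illustrate what the kubelet
# fails to find on first boot.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
```

[Once this file exists, the scheduled restarts seen later in the log would succeed instead of exiting with status=1/FAILURE.]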
Sep 4 17:32:46.875517 systemd[1]: Started sshd@0-10.0.0.157:22-10.0.0.1:58984.service - OpenSSH per-connection server daemon (10.0.0.1:58984).
Sep 4 17:32:46.921004 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 58984 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:32:46.922803 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:46.931156 systemd-logind[1444]: New session 1 of user core.
Sep 4 17:32:46.932636 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:32:46.949020 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:32:46.961171 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:32:46.963988 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:32:46.972213 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:47.090361 systemd[1563]: Queued start job for default target default.target.
Sep 4 17:32:47.104107 systemd[1563]: Created slice app.slice - User Application Slice.
Sep 4 17:32:47.104133 systemd[1563]: Reached target paths.target - Paths.
Sep 4 17:32:47.104146 systemd[1563]: Reached target timers.target - Timers.
Sep 4 17:32:47.105678 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:32:47.117084 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:32:47.117211 systemd[1563]: Reached target sockets.target - Sockets.
Sep 4 17:32:47.117229 systemd[1563]: Reached target basic.target - Basic System.
Sep 4 17:32:47.117266 systemd[1563]: Reached target default.target - Main User Target.
Sep 4 17:32:47.117298 systemd[1563]: Startup finished in 138ms.
Sep 4 17:32:47.117813 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:32:47.119336 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:32:47.178800 systemd[1]: Started sshd@1-10.0.0.157:22-10.0.0.1:58988.service - OpenSSH per-connection server daemon (10.0.0.1:58988).
Sep 4 17:32:47.217053 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 58988 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:32:47.218733 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:47.222654 systemd-logind[1444]: New session 2 of user core.
Sep 4 17:32:47.231938 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:32:47.286146 sshd[1574]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:47.297489 systemd[1]: sshd@1-10.0.0.157:22-10.0.0.1:58988.service: Deactivated successfully.
Sep 4 17:32:47.299060 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:32:47.300734 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:32:47.301933 systemd[1]: Started sshd@2-10.0.0.157:22-10.0.0.1:58990.service - OpenSSH per-connection server daemon (10.0.0.1:58990).
Sep 4 17:32:47.302585 systemd-logind[1444]: Removed session 2.
Sep 4 17:32:47.337718 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 58990 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:32:47.339207 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:47.343268 systemd-logind[1444]: New session 3 of user core.
Sep 4 17:32:47.351965 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:32:47.403137 sshd[1581]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:47.420529 systemd[1]: sshd@2-10.0.0.157:22-10.0.0.1:58990.service: Deactivated successfully.
Sep 4 17:32:47.423108 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:32:47.425344 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:32:47.438180 systemd[1]: Started sshd@3-10.0.0.157:22-10.0.0.1:59000.service - OpenSSH per-connection server daemon (10.0.0.1:59000).
Sep 4 17:32:47.439494 systemd-logind[1444]: Removed session 3.
Sep 4 17:32:47.469656 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 59000 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:32:47.471205 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:47.475694 systemd-logind[1444]: New session 4 of user core.
Sep 4 17:32:47.495095 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:32:47.551207 sshd[1588]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:47.562702 systemd[1]: sshd@3-10.0.0.157:22-10.0.0.1:59000.service: Deactivated successfully.
Sep 4 17:32:47.564310 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:32:47.565998 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:32:47.567223 systemd[1]: Started sshd@4-10.0.0.157:22-10.0.0.1:59004.service - OpenSSH per-connection server daemon (10.0.0.1:59004).
Sep 4 17:32:47.567880 systemd-logind[1444]: Removed session 4.
Sep 4 17:32:47.603382 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 59004 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:32:47.604889 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:47.608817 systemd-logind[1444]: New session 5 of user core.
Sep 4 17:32:47.620024 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:32:47.809216 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:32:47.809600 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:32:47.836317 sudo[1598]: pam_unix(sudo:session): session closed for user root
Sep 4 17:32:47.838636 sshd[1595]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:47.851029 systemd[1]: sshd@4-10.0.0.157:22-10.0.0.1:59004.service: Deactivated successfully.
Sep 4 17:32:47.853254 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:32:47.855321 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:32:47.866059 systemd[1]: Started sshd@5-10.0.0.157:22-10.0.0.1:59012.service - OpenSSH per-connection server daemon (10.0.0.1:59012).
Sep 4 17:32:47.866877 systemd-logind[1444]: Removed session 5.
Sep 4 17:32:47.897499 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 59012 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:32:47.898991 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:47.903025 systemd-logind[1444]: New session 6 of user core.
Sep 4 17:32:47.909932 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:32:47.962864 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:32:47.963249 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:32:47.966672 sudo[1607]: pam_unix(sudo:session): session closed for user root
Sep 4 17:32:47.972123 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:32:47.972461 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:32:47.990028 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:32:47.991529 auditctl[1610]: No rules
Sep 4 17:32:47.992767 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:32:47.993015 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:32:47.994684 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:32:48.022669 augenrules[1628]: No rules
Sep 4 17:32:48.024384 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:32:48.025606 sudo[1606]: pam_unix(sudo:session): session closed for user root
Sep 4 17:32:48.027471 sshd[1603]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:48.037525 systemd[1]: sshd@5-10.0.0.157:22-10.0.0.1:59012.service: Deactivated successfully.
Sep 4 17:32:48.039166 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:32:48.040583 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:32:48.042024 systemd[1]: Started sshd@6-10.0.0.157:22-10.0.0.1:59018.service - OpenSSH per-connection server daemon (10.0.0.1:59018).
Sep 4 17:32:48.042842 systemd-logind[1444]: Removed session 6.
Sep 4 17:32:48.076758 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 59018 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:32:48.078119 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:48.081712 systemd-logind[1444]: New session 7 of user core.
Sep 4 17:32:48.091949 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:32:48.143046 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:32:48.143324 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:32:48.237019 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:32:48.237182 (dockerd)[1649]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:32:48.468432 dockerd[1649]: time="2024-09-04T17:32:48.468306704Z" level=info msg="Starting up"
Sep 4 17:32:48.520597 dockerd[1649]: time="2024-09-04T17:32:48.520535367Z" level=info msg="Loading containers: start."
Sep 4 17:32:48.642890 kernel: Initializing XFRM netlink socket
Sep 4 17:32:48.738321 systemd-networkd[1397]: docker0: Link UP
Sep 4 17:32:48.771795 dockerd[1649]: time="2024-09-04T17:32:48.771739484Z" level=info msg="Loading containers: done."
Sep 4 17:32:48.821154 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2237648951-merged.mount: Deactivated successfully.
Sep 4 17:32:48.822416 dockerd[1649]: time="2024-09-04T17:32:48.822366241Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:32:48.822857 dockerd[1649]: time="2024-09-04T17:32:48.822814713Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:32:48.822991 dockerd[1649]: time="2024-09-04T17:32:48.822968361Z" level=info msg="Daemon has completed initialization"
Sep 4 17:32:48.853674 dockerd[1649]: time="2024-09-04T17:32:48.853596863Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:32:48.853864 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:32:49.497430 containerd[1460]: time="2024-09-04T17:32:49.497387861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\""
Sep 4 17:32:49.743021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:32:49.752333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:32:49.915323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:32:49.920268 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:32:50.558742 kubelet[1799]: E0904 17:32:50.558658 1799 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:32:50.566428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:32:50.566658 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:32:51.925744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902687525.mount: Deactivated successfully.
Sep 4 17:32:52.890990 containerd[1460]: time="2024-09-04T17:32:52.890897899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:52.891944 containerd[1460]: time="2024-09-04T17:32:52.891860465Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=32772416"
Sep 4 17:32:52.893278 containerd[1460]: time="2024-09-04T17:32:52.893232710Z" level=info msg="ImageCreate event name:\"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:52.896243 containerd[1460]: time="2024-09-04T17:32:52.896183337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:52.897667 containerd[1460]: time="2024-09-04T17:32:52.897439504Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"32769216\" in 3.400008622s"
Sep 4 17:32:52.897667 containerd[1460]: time="2024-09-04T17:32:52.897503244Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:8a97b1fb3e2ebd03bf97ce8ae894b3dc8a68ab1f4ecfd0a284921c45c56f5aa4\""
Sep 4 17:32:52.919096 containerd[1460]: time="2024-09-04T17:32:52.919066906Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\""
Sep 4 17:32:54.761067 containerd[1460]: time="2024-09-04T17:32:54.760995704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:54.761741 containerd[1460]: time="2024-09-04T17:32:54.761707860Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=29594065"
Sep 4 17:32:54.763039 containerd[1460]: time="2024-09-04T17:32:54.763008230Z" level=info msg="ImageCreate event name:\"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:54.765809 containerd[1460]: time="2024-09-04T17:32:54.765770774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:54.766758 containerd[1460]: time="2024-09-04T17:32:54.766725545Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"31144011\" in 1.847623483s"
Sep 4 17:32:54.766795 containerd[1460]: time="2024-09-04T17:32:54.766756744Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:8398ad49a121d58ecf8a36e8371c0928fdf75eb0a83d28232ab2b39b1c6a9050\""
Sep 4 17:32:54.790603 containerd[1460]: time="2024-09-04T17:32:54.790559428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\""
Sep 4 17:32:55.753301 containerd[1460]: time="2024-09-04T17:32:55.753253144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:55.753939 containerd[1460]: time="2024-09-04T17:32:55.753874229Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=17780233"
Sep 4 17:32:55.754949 containerd[1460]: time="2024-09-04T17:32:55.754922857Z" level=info msg="ImageCreate event name:\"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:55.757638 containerd[1460]: time="2024-09-04T17:32:55.757582648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:55.758655 containerd[1460]: time="2024-09-04T17:32:55.758619934Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"19330197\" in 968.025039ms"
Sep 4 17:32:55.758655 containerd[1460]: time="2024-09-04T17:32:55.758650852Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:4939f82ab9ab456e782c06ed37b245127c8a9ac29a72982346a7160f18107833\""
Sep 4 17:32:55.780194 containerd[1460]: time="2024-09-04T17:32:55.780160934Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\""
Sep 4 17:32:56.727426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037344757.mount: Deactivated successfully.
Sep 4 17:32:57.296215 containerd[1460]: time="2024-09-04T17:32:57.296151241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:57.296919 containerd[1460]: time="2024-09-04T17:32:57.296878476Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=29037161"
Sep 4 17:32:57.298017 containerd[1460]: time="2024-09-04T17:32:57.297979692Z" level=info msg="ImageCreate event name:\"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:57.300184 containerd[1460]: time="2024-09-04T17:32:57.300141719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:57.300746 containerd[1460]: time="2024-09-04T17:32:57.300712350Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"29036180\" in 1.520515909s"
Sep 4 17:32:57.300802 containerd[1460]: time="2024-09-04T17:32:57.300747365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:568d5ba88d944bcd67415d8c358fce615824410f3a43bab2b353336bc3795a10\""
Sep 4 17:32:57.322267 containerd[1460]: time="2024-09-04T17:32:57.322208245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep 4 17:32:57.878177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649445339.mount: Deactivated successfully.
Sep 4 17:32:58.545220 containerd[1460]: time="2024-09-04T17:32:58.545149547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:58.546121 containerd[1460]: time="2024-09-04T17:32:58.546057441Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Sep 4 17:32:58.547417 containerd[1460]: time="2024-09-04T17:32:58.547387246Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:58.551694 containerd[1460]: time="2024-09-04T17:32:58.551202695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:58.552955 containerd[1460]: time="2024-09-04T17:32:58.552888739Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.230633085s"
Sep 4 17:32:58.552955 containerd[1460]: time="2024-09-04T17:32:58.552942059Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Sep 4 17:32:58.575494 containerd[1460]: time="2024-09-04T17:32:58.575464621Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:32:59.039518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498095749.mount: Deactivated successfully.
Sep 4 17:32:59.045689 containerd[1460]: time="2024-09-04T17:32:59.045642286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:59.046436 containerd[1460]: time="2024-09-04T17:32:59.046382915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Sep 4 17:32:59.047712 containerd[1460]: time="2024-09-04T17:32:59.047687934Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:59.049891 containerd[1460]: time="2024-09-04T17:32:59.049865540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:59.050590 containerd[1460]: time="2024-09-04T17:32:59.050568840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 474.908231ms"
Sep 4 17:32:59.050647 containerd[1460]: time="2024-09-04T17:32:59.050594719Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep 4 17:32:59.072963 containerd[1460]: time="2024-09-04T17:32:59.072922786Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Sep 4 17:32:59.744745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1054932979.mount: Deactivated successfully.
Sep 4 17:33:00.816872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:33:00.824036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:33:00.968007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:33:00.972561 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:33:01.007078 kubelet[1997]: E0904 17:33:01.007025 1997 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:33:01.011742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:33:01.011968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:33:02.480909 containerd[1460]: time="2024-09-04T17:33:02.480854004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:02.481805 containerd[1460]: time="2024-09-04T17:33:02.481769352Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Sep 4 17:33:02.483318 containerd[1460]: time="2024-09-04T17:33:02.483269747Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:02.486286 containerd[1460]: time="2024-09-04T17:33:02.486240111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:02.487393 containerd[1460]: time="2024-09-04T17:33:02.487364661Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.414403263s"
Sep 4 17:33:02.487432 containerd[1460]: time="2024-09-04T17:33:02.487397042Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Sep 4 17:33:05.060807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:33:05.073018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:33:05.089423 systemd[1]: Reloading requested from client PID 2102 ('systemctl') (unit session-7.scope)...
Sep 4 17:33:05.089438 systemd[1]: Reloading...
Sep 4 17:33:05.171884 zram_generator::config[2142]: No configuration found. Sep 4 17:33:05.461325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:33:05.536337 systemd[1]: Reloading finished in 446 ms. Sep 4 17:33:05.601122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:33:05.605034 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:33:05.605314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:33:05.606963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:33:05.748917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:33:05.754361 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:33:05.790940 kubelet[2189]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:33:05.790940 kubelet[2189]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:33:05.790940 kubelet[2189]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:33:05.791331 kubelet[2189]: I0904 17:33:05.790997 2189 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:33:06.216601 kubelet[2189]: I0904 17:33:06.216494 2189 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:33:06.216601 kubelet[2189]: I0904 17:33:06.216521 2189 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:33:06.216778 kubelet[2189]: I0904 17:33:06.216755 2189 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:33:06.230873 kubelet[2189]: E0904 17:33:06.230845 2189 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.157:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:06.231545 kubelet[2189]: I0904 17:33:06.231511 2189 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:33:06.242183 kubelet[2189]: I0904 17:33:06.242163 2189 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:33:06.243136 kubelet[2189]: I0904 17:33:06.243094 2189 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:33:06.243279 kubelet[2189]: I0904 17:33:06.243129 2189 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:33:06.243651 kubelet[2189]: I0904 17:33:06.243629 2189 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:33:06.243651 
kubelet[2189]: I0904 17:33:06.243646 2189 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:33:06.243805 kubelet[2189]: I0904 17:33:06.243784 2189 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:33:06.244354 kubelet[2189]: I0904 17:33:06.244339 2189 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:33:06.244354 kubelet[2189]: I0904 17:33:06.244352 2189 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:33:06.244430 kubelet[2189]: I0904 17:33:06.244373 2189 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:33:06.244430 kubelet[2189]: I0904 17:33:06.244390 2189 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:33:06.244849 kubelet[2189]: W0904 17:33:06.244784 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:06.244905 kubelet[2189]: E0904 17:33:06.244876 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:06.244994 kubelet[2189]: W0904 17:33:06.244957 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:06.245060 kubelet[2189]: E0904 17:33:06.245001 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 
17:33:06.247902 kubelet[2189]: I0904 17:33:06.247885 2189 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:33:06.249043 kubelet[2189]: I0904 17:33:06.249018 2189 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:33:06.249094 kubelet[2189]: W0904 17:33:06.249070 2189 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:33:06.249930 kubelet[2189]: I0904 17:33:06.249914 2189 server.go:1264] "Started kubelet" Sep 4 17:33:06.250316 kubelet[2189]: I0904 17:33:06.250081 2189 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:33:06.250562 kubelet[2189]: I0904 17:33:06.250543 2189 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:33:06.250611 kubelet[2189]: I0904 17:33:06.250585 2189 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:33:06.251908 kubelet[2189]: I0904 17:33:06.251882 2189 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:33:06.251983 kubelet[2189]: I0904 17:33:06.251960 2189 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:33:06.254926 kubelet[2189]: I0904 17:33:06.254908 2189 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:33:06.255162 kubelet[2189]: I0904 17:33:06.255144 2189 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:33:06.255241 kubelet[2189]: I0904 17:33:06.255196 2189 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:33:06.256646 kubelet[2189]: W0904 17:33:06.256588 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.157:6443: connect: connection refused Sep 4 17:33:06.256972 kubelet[2189]: I0904 17:33:06.256951 2189 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:33:06.257059 kubelet[2189]: I0904 17:33:06.257042 2189 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:33:06.257550 kubelet[2189]: E0904 17:33:06.257528 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:06.258007 kubelet[2189]: E0904 17:33:06.257886 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.157:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.157:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21ae6e698cfab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:33:06.249895851 +0000 UTC m=+0.491241280,LastTimestamp:2024-09-04 17:33:06.249895851 +0000 UTC m=+0.491241280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:33:06.258007 kubelet[2189]: E0904 17:33:06.257970 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="200ms" Sep 4 17:33:06.258167 
kubelet[2189]: E0904 17:33:06.258073 2189 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:33:06.259452 kubelet[2189]: I0904 17:33:06.258536 2189 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:33:06.269773 kubelet[2189]: I0904 17:33:06.269660 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:33:06.270928 kubelet[2189]: I0904 17:33:06.270880 2189 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:33:06.270928 kubelet[2189]: I0904 17:33:06.270904 2189 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:33:06.270928 kubelet[2189]: I0904 17:33:06.270920 2189 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:33:06.271048 kubelet[2189]: E0904 17:33:06.270955 2189 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:33:06.274905 kubelet[2189]: W0904 17:33:06.274795 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:06.274958 kubelet[2189]: E0904 17:33:06.274912 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:06.275302 kubelet[2189]: I0904 17:33:06.275286 2189 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:33:06.275347 kubelet[2189]: I0904 17:33:06.275321 2189 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:33:06.275368 
kubelet[2189]: I0904 17:33:06.275361 2189 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:33:06.356453 kubelet[2189]: I0904 17:33:06.356418 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:33:06.356729 kubelet[2189]: E0904 17:33:06.356698 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Sep 4 17:33:06.371899 kubelet[2189]: E0904 17:33:06.371863 2189 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:33:06.458521 kubelet[2189]: E0904 17:33:06.458487 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="400ms" Sep 4 17:33:06.558041 kubelet[2189]: I0904 17:33:06.557927 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:33:06.558273 kubelet[2189]: E0904 17:33:06.558242 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Sep 4 17:33:06.572387 kubelet[2189]: E0904 17:33:06.572355 2189 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:33:06.637467 kubelet[2189]: I0904 17:33:06.637433 2189 policy_none.go:49] "None policy: Start" Sep 4 17:33:06.638185 kubelet[2189]: I0904 17:33:06.638121 2189 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:33:06.638185 kubelet[2189]: I0904 17:33:06.638143 2189 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:33:06.644479 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Sep 4 17:33:06.656469 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:33:06.659348 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:33:06.670679 kubelet[2189]: I0904 17:33:06.670646 2189 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:33:06.671100 kubelet[2189]: I0904 17:33:06.670873 2189 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:33:06.671100 kubelet[2189]: I0904 17:33:06.670997 2189 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:33:06.671893 kubelet[2189]: E0904 17:33:06.671863 2189 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:33:06.859872 kubelet[2189]: E0904 17:33:06.859683 2189 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="800ms" Sep 4 17:33:06.960267 kubelet[2189]: I0904 17:33:06.960223 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:33:06.960617 kubelet[2189]: E0904 17:33:06.960576 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Sep 4 17:33:06.972778 kubelet[2189]: I0904 17:33:06.972725 2189 topology_manager.go:215] "Topology Admit Handler" podUID="2a992fb48436c3d86cff7bec73de6a2c" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:33:06.973862 kubelet[2189]: I0904 17:33:06.973838 2189 topology_manager.go:215] "Topology Admit Handler" 
podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:33:06.974650 kubelet[2189]: I0904 17:33:06.974625 2189 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:33:06.980357 systemd[1]: Created slice kubepods-burstable-pod2a992fb48436c3d86cff7bec73de6a2c.slice - libcontainer container kubepods-burstable-pod2a992fb48436c3d86cff7bec73de6a2c.slice. Sep 4 17:33:07.006447 systemd[1]: Created slice kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice - libcontainer container kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice. Sep 4 17:33:07.010333 systemd[1]: Created slice kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice - libcontainer container kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice. Sep 4 17:33:07.060055 kubelet[2189]: I0904 17:33:07.060000 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:07.060055 kubelet[2189]: I0904 17:33:07.060043 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:07.060055 kubelet[2189]: I0904 17:33:07.060060 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:07.060422 kubelet[2189]: I0904 17:33:07.060089 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a992fb48436c3d86cff7bec73de6a2c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a992fb48436c3d86cff7bec73de6a2c\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:33:07.060422 kubelet[2189]: I0904 17:33:07.060105 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:07.060422 kubelet[2189]: I0904 17:33:07.060160 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:07.060422 kubelet[2189]: I0904 17:33:07.060216 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:33:07.060422 kubelet[2189]: I0904 17:33:07.060245 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2a992fb48436c3d86cff7bec73de6a2c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a992fb48436c3d86cff7bec73de6a2c\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:33:07.060566 kubelet[2189]: I0904 17:33:07.060263 2189 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a992fb48436c3d86cff7bec73de6a2c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2a992fb48436c3d86cff7bec73de6a2c\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:33:07.237255 kubelet[2189]: W0904 17:33:07.237130 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:07.237255 kubelet[2189]: E0904 17:33:07.237181 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:07.304382 kubelet[2189]: E0904 17:33:07.304342 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:07.304914 containerd[1460]: time="2024-09-04T17:33:07.304883419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2a992fb48436c3d86cff7bec73de6a2c,Namespace:kube-system,Attempt:0,}" Sep 4 17:33:07.309054 kubelet[2189]: E0904 17:33:07.309035 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:07.309345 containerd[1460]: 
time="2024-09-04T17:33:07.309313031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,}" Sep 4 17:33:07.312584 kubelet[2189]: E0904 17:33:07.312554 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:07.312927 containerd[1460]: time="2024-09-04T17:33:07.312773013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,}" Sep 4 17:33:07.579572 kubelet[2189]: W0904 17:33:07.579444 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:07.579572 kubelet[2189]: E0904 17:33:07.579509 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:07.640005 kubelet[2189]: W0904 17:33:07.639966 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:07.640005 kubelet[2189]: E0904 17:33:07.640004 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:07.660522 kubelet[2189]: E0904 17:33:07.660483 2189 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.157:6443: connect: connection refused" interval="1.6s" Sep 4 17:33:07.750432 kubelet[2189]: W0904 17:33:07.750381 2189 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:07.750432 kubelet[2189]: E0904 17:33:07.750430 2189 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.157:6443: connect: connection refused Sep 4 17:33:07.761465 kubelet[2189]: I0904 17:33:07.761446 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:33:07.761685 kubelet[2189]: E0904 17:33:07.761640 2189 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.157:6443/api/v1/nodes\": dial tcp 10.0.0.157:6443: connect: connection refused" node="localhost" Sep 4 17:33:07.972898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2335713724.mount: Deactivated successfully. 
Sep 4 17:33:07.981142 containerd[1460]: time="2024-09-04T17:33:07.981070055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:33:07.982082 containerd[1460]: time="2024-09-04T17:33:07.982040426Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:33:07.983040 containerd[1460]: time="2024-09-04T17:33:07.983011969Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:33:07.983973 containerd[1460]: time="2024-09-04T17:33:07.983933478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:33:07.984862 containerd[1460]: time="2024-09-04T17:33:07.984810975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:33:07.985794 containerd[1460]: time="2024-09-04T17:33:07.985736661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:33:07.986653 containerd[1460]: time="2024-09-04T17:33:07.986625660Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:33:07.990370 containerd[1460]: time="2024-09-04T17:33:07.990339969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:33:07.991170 
containerd[1460]: time="2024-09-04T17:33:07.991142846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 678.28376ms" Sep 4 17:33:07.992513 containerd[1460]: time="2024-09-04T17:33:07.992484744Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 687.516235ms" Sep 4 17:33:07.993911 containerd[1460]: time="2024-09-04T17:33:07.993877227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 684.491148ms" Sep 4 17:33:08.009164 kubelet[2189]: E0904 17:33:08.009070 2189 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.157:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.157:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21ae6e698cfab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:33:06.249895851 +0000 UTC m=+0.491241280,LastTimestamp:2024-09-04 17:33:06.249895851 +0000 UTC m=+0.491241280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:33:08.137459 containerd[1460]: time="2024-09-04T17:33:08.137373267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:08.137459 containerd[1460]: time="2024-09-04T17:33:08.137419293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:08.137938 containerd[1460]: time="2024-09-04T17:33:08.137432267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:08.137938 containerd[1460]: time="2024-09-04T17:33:08.137446174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:08.138095 containerd[1460]: time="2024-09-04T17:33:08.137776584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:08.138095 containerd[1460]: time="2024-09-04T17:33:08.137839582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:08.138095 containerd[1460]: time="2024-09-04T17:33:08.137865611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:08.138095 containerd[1460]: time="2024-09-04T17:33:08.137896308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:08.138256 containerd[1460]: time="2024-09-04T17:33:08.137752919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:08.138256 containerd[1460]: time="2024-09-04T17:33:08.137792814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:08.138256 containerd[1460]: time="2024-09-04T17:33:08.137809455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:08.138256 containerd[1460]: time="2024-09-04T17:33:08.137838119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:08.162054 systemd[1]: Started cri-containerd-452f494bb4a69cfdf6070145f45416ee56d0100d798179413e2ef30f24f22108.scope - libcontainer container 452f494bb4a69cfdf6070145f45416ee56d0100d798179413e2ef30f24f22108. Sep 4 17:33:08.163806 systemd[1]: Started cri-containerd-66ad00f82cab5469226be8b74995efc0f2281d402f9be17aae7000d42fc410c7.scope - libcontainer container 66ad00f82cab5469226be8b74995efc0f2281d402f9be17aae7000d42fc410c7. Sep 4 17:33:08.165374 systemd[1]: Started cri-containerd-7f182926bf63317fc2d4e86f0d8ca00980eccff8932e93beb127fa2ec25fda9f.scope - libcontainer container 7f182926bf63317fc2d4e86f0d8ca00980eccff8932e93beb127fa2ec25fda9f. 
Sep 4 17:33:08.201379 containerd[1460]: time="2024-09-04T17:33:08.199986085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2a992fb48436c3d86cff7bec73de6a2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"452f494bb4a69cfdf6070145f45416ee56d0100d798179413e2ef30f24f22108\"" Sep 4 17:33:08.203913 kubelet[2189]: E0904 17:33:08.202840 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:08.205756 containerd[1460]: time="2024-09-04T17:33:08.205716017Z" level=info msg="CreateContainer within sandbox \"452f494bb4a69cfdf6070145f45416ee56d0100d798179413e2ef30f24f22108\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:33:08.210089 containerd[1460]: time="2024-09-04T17:33:08.210037015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,} returns sandbox id \"66ad00f82cab5469226be8b74995efc0f2281d402f9be17aae7000d42fc410c7\"" Sep 4 17:33:08.210865 kubelet[2189]: E0904 17:33:08.210675 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:08.211379 containerd[1460]: time="2024-09-04T17:33:08.211313440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f182926bf63317fc2d4e86f0d8ca00980eccff8932e93beb127fa2ec25fda9f\"" Sep 4 17:33:08.211719 kubelet[2189]: E0904 17:33:08.211684 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:08.213619 containerd[1460]: 
time="2024-09-04T17:33:08.213564113Z" level=info msg="CreateContainer within sandbox \"7f182926bf63317fc2d4e86f0d8ca00980eccff8932e93beb127fa2ec25fda9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:33:08.213659 containerd[1460]: time="2024-09-04T17:33:08.213636279Z" level=info msg="CreateContainer within sandbox \"66ad00f82cab5469226be8b74995efc0f2281d402f9be17aae7000d42fc410c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:33:08.231029 containerd[1460]: time="2024-09-04T17:33:08.230806307Z" level=info msg="CreateContainer within sandbox \"452f494bb4a69cfdf6070145f45416ee56d0100d798179413e2ef30f24f22108\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8bf748fc31608755f5326f9413c588877ac91930523fef0707fcb056f5025223\"" Sep 4 17:33:08.231341 containerd[1460]: time="2024-09-04T17:33:08.231301035Z" level=info msg="StartContainer for \"8bf748fc31608755f5326f9413c588877ac91930523fef0707fcb056f5025223\"" Sep 4 17:33:08.238922 containerd[1460]: time="2024-09-04T17:33:08.238852125Z" level=info msg="CreateContainer within sandbox \"7f182926bf63317fc2d4e86f0d8ca00980eccff8932e93beb127fa2ec25fda9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"60c59d7badf96a6d47830f54daab36f7d85ff85c69fdb4f3023be302c924f73b\"" Sep 4 17:33:08.239571 containerd[1460]: time="2024-09-04T17:33:08.239491574Z" level=info msg="StartContainer for \"60c59d7badf96a6d47830f54daab36f7d85ff85c69fdb4f3023be302c924f73b\"" Sep 4 17:33:08.243627 containerd[1460]: time="2024-09-04T17:33:08.243331199Z" level=info msg="CreateContainer within sandbox \"66ad00f82cab5469226be8b74995efc0f2281d402f9be17aae7000d42fc410c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5cd50672a16557bfa89dc7cce1c2ff0f6f65eaef7ee2c66b9c98edf77238b6b0\"" Sep 4 17:33:08.243771 containerd[1460]: time="2024-09-04T17:33:08.243744104Z" level=info msg="StartContainer for 
\"5cd50672a16557bfa89dc7cce1c2ff0f6f65eaef7ee2c66b9c98edf77238b6b0\"" Sep 4 17:33:08.260036 systemd[1]: Started cri-containerd-8bf748fc31608755f5326f9413c588877ac91930523fef0707fcb056f5025223.scope - libcontainer container 8bf748fc31608755f5326f9413c588877ac91930523fef0707fcb056f5025223. Sep 4 17:33:08.263450 systemd[1]: Started cri-containerd-60c59d7badf96a6d47830f54daab36f7d85ff85c69fdb4f3023be302c924f73b.scope - libcontainer container 60c59d7badf96a6d47830f54daab36f7d85ff85c69fdb4f3023be302c924f73b. Sep 4 17:33:08.269462 systemd[1]: Started cri-containerd-5cd50672a16557bfa89dc7cce1c2ff0f6f65eaef7ee2c66b9c98edf77238b6b0.scope - libcontainer container 5cd50672a16557bfa89dc7cce1c2ff0f6f65eaef7ee2c66b9c98edf77238b6b0. Sep 4 17:33:08.307762 containerd[1460]: time="2024-09-04T17:33:08.307652463Z" level=info msg="StartContainer for \"60c59d7badf96a6d47830f54daab36f7d85ff85c69fdb4f3023be302c924f73b\" returns successfully" Sep 4 17:33:08.312743 containerd[1460]: time="2024-09-04T17:33:08.312418156Z" level=info msg="StartContainer for \"8bf748fc31608755f5326f9413c588877ac91930523fef0707fcb056f5025223\" returns successfully" Sep 4 17:33:08.320124 containerd[1460]: time="2024-09-04T17:33:08.320081325Z" level=info msg="StartContainer for \"5cd50672a16557bfa89dc7cce1c2ff0f6f65eaef7ee2c66b9c98edf77238b6b0\" returns successfully" Sep 4 17:33:09.288173 kubelet[2189]: E0904 17:33:09.288142 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:09.290815 kubelet[2189]: E0904 17:33:09.290797 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:09.294898 kubelet[2189]: E0904 17:33:09.294868 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:09.323379 kubelet[2189]: E0904 17:33:09.323334 2189 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:33:09.363399 kubelet[2189]: I0904 17:33:09.363350 2189 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:33:09.471052 kubelet[2189]: I0904 17:33:09.471007 2189 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:33:09.478108 kubelet[2189]: E0904 17:33:09.478062 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:09.579129 kubelet[2189]: E0904 17:33:09.578993 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:09.684434 kubelet[2189]: E0904 17:33:09.684379 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:09.785298 kubelet[2189]: E0904 17:33:09.785244 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:09.885847 kubelet[2189]: E0904 17:33:09.885772 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:09.986397 kubelet[2189]: E0904 17:33:09.986334 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:10.086950 kubelet[2189]: E0904 17:33:10.086896 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:10.187539 kubelet[2189]: E0904 17:33:10.187407 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:10.288484 kubelet[2189]: E0904 17:33:10.288420 2189 kubelet_node_status.go:462] 
"Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:10.294744 kubelet[2189]: E0904 17:33:10.294724 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:10.295636 kubelet[2189]: E0904 17:33:10.294977 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:10.295636 kubelet[2189]: E0904 17:33:10.295600 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:10.389025 kubelet[2189]: E0904 17:33:10.388981 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:10.489599 kubelet[2189]: E0904 17:33:10.489463 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:10.589914 kubelet[2189]: E0904 17:33:10.589876 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:10.690661 kubelet[2189]: E0904 17:33:10.690611 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:10.791602 kubelet[2189]: E0904 17:33:10.791470 2189 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:11.185980 systemd[1]: Reloading requested from client PID 2463 ('systemctl') (unit session-7.scope)... Sep 4 17:33:11.185995 systemd[1]: Reloading... 
Sep 4 17:33:11.247926 kubelet[2189]: I0904 17:33:11.247890 2189 apiserver.go:52] "Watching apiserver" Sep 4 17:33:11.256303 kubelet[2189]: I0904 17:33:11.256225 2189 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:33:11.263869 zram_generator::config[2506]: No configuration found. Sep 4 17:33:11.306591 kubelet[2189]: E0904 17:33:11.303465 2189 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:11.363865 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:33:11.453935 systemd[1]: Reloading finished in 267 ms. Sep 4 17:33:11.504428 kubelet[2189]: I0904 17:33:11.504378 2189 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:33:11.504597 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:33:11.525079 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:33:11.525336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:33:11.540165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:33:11.687896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:33:11.696189 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:33:11.732412 kubelet[2545]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:33:11.733839 kubelet[2545]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:33:11.733839 kubelet[2545]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:33:11.733839 kubelet[2545]: I0904 17:33:11.732798 2545 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:33:11.737268 kubelet[2545]: I0904 17:33:11.737245 2545 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:33:11.737268 kubelet[2545]: I0904 17:33:11.737261 2545 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:33:11.737407 kubelet[2545]: I0904 17:33:11.737396 2545 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:33:11.738606 kubelet[2545]: I0904 17:33:11.738588 2545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:33:11.739537 kubelet[2545]: I0904 17:33:11.739522 2545 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:33:11.746929 kubelet[2545]: I0904 17:33:11.746900 2545 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:33:11.747152 kubelet[2545]: I0904 17:33:11.747120 2545 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:33:11.747284 kubelet[2545]: I0904 17:33:11.747145 2545 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:33:11.747362 kubelet[2545]: I0904 17:33:11.747299 2545 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:33:11.747362 
kubelet[2545]: I0904 17:33:11.747310 2545 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:33:11.747362 kubelet[2545]: I0904 17:33:11.747354 2545 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:33:11.747445 kubelet[2545]: I0904 17:33:11.747433 2545 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:33:11.747445 kubelet[2545]: I0904 17:33:11.747444 2545 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:33:11.747503 kubelet[2545]: I0904 17:33:11.747462 2545 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:33:11.747503 kubelet[2545]: I0904 17:33:11.747480 2545 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:33:11.748192 kubelet[2545]: I0904 17:33:11.748171 2545 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:33:11.748793 kubelet[2545]: I0904 17:33:11.748321 2545 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:33:11.748793 kubelet[2545]: I0904 17:33:11.748639 2545 server.go:1264] "Started kubelet" Sep 4 17:33:11.749892 kubelet[2545]: I0904 17:33:11.749853 2545 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:33:11.751926 kubelet[2545]: I0904 17:33:11.750280 2545 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:33:11.751926 kubelet[2545]: I0904 17:33:11.750876 2545 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:33:11.752872 kubelet[2545]: I0904 17:33:11.749796 2545 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:33:11.753087 kubelet[2545]: I0904 17:33:11.753061 2545 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:33:11.756334 kubelet[2545]: E0904 17:33:11.756278 2545 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:33:11.756481 kubelet[2545]: I0904 17:33:11.756465 2545 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:33:11.756576 kubelet[2545]: I0904 17:33:11.756563 2545 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:33:11.756728 kubelet[2545]: I0904 17:33:11.756716 2545 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:33:11.760741 kubelet[2545]: I0904 17:33:11.760696 2545 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:33:11.760849 kubelet[2545]: I0904 17:33:11.760788 2545 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:33:11.762776 kubelet[2545]: E0904 17:33:11.762717 2545 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:33:11.763294 kubelet[2545]: I0904 17:33:11.763265 2545 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:33:11.769745 kubelet[2545]: I0904 17:33:11.769701 2545 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:33:11.770940 kubelet[2545]: I0904 17:33:11.770926 2545 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:33:11.771010 kubelet[2545]: I0904 17:33:11.770950 2545 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:33:11.771010 kubelet[2545]: I0904 17:33:11.770967 2545 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:33:11.771068 kubelet[2545]: E0904 17:33:11.771006 2545 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:33:11.801278 kubelet[2545]: I0904 17:33:11.801252 2545 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:33:11.801278 kubelet[2545]: I0904 17:33:11.801270 2545 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:33:11.801278 kubelet[2545]: I0904 17:33:11.801287 2545 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:33:11.801433 kubelet[2545]: I0904 17:33:11.801417 2545 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:33:11.801464 kubelet[2545]: I0904 17:33:11.801426 2545 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:33:11.801464 kubelet[2545]: I0904 17:33:11.801442 2545 policy_none.go:49] "None policy: Start" Sep 4 17:33:11.801842 kubelet[2545]: I0904 17:33:11.801815 2545 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:33:11.801883 kubelet[2545]: I0904 17:33:11.801847 2545 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:33:11.801988 kubelet[2545]: I0904 17:33:11.801976 2545 state_mem.go:75] "Updated machine memory state" Sep 4 17:33:11.805760 kubelet[2545]: I0904 17:33:11.805726 2545 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:33:11.806126 kubelet[2545]: I0904 17:33:11.806095 2545 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:33:11.806212 kubelet[2545]: I0904 17:33:11.806202 2545 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:33:11.860562 kubelet[2545]: I0904 17:33:11.860525 2545 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:33:11.866763 kubelet[2545]: I0904 17:33:11.866742 2545 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Sep 4 17:33:11.866892 kubelet[2545]: I0904 17:33:11.866817 2545 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:33:11.871707 kubelet[2545]: I0904 17:33:11.871664 2545 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:33:11.871832 kubelet[2545]: I0904 17:33:11.871779 2545 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:33:11.871871 kubelet[2545]: I0904 17:33:11.871814 2545 topology_manager.go:215] "Topology Admit Handler" podUID="2a992fb48436c3d86cff7bec73de6a2c" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:33:11.877404 kubelet[2545]: E0904 17:33:11.877232 2545 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:33:12.058644 kubelet[2545]: I0904 17:33:12.058512 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:12.058644 kubelet[2545]: I0904 17:33:12.058553 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:12.058644 kubelet[2545]: I0904 17:33:12.058583 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a992fb48436c3d86cff7bec73de6a2c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a992fb48436c3d86cff7bec73de6a2c\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:33:12.058644 kubelet[2545]: I0904 17:33:12.058609 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a992fb48436c3d86cff7bec73de6a2c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2a992fb48436c3d86cff7bec73de6a2c\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:33:12.058956 kubelet[2545]: I0904 17:33:12.058648 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:12.058956 kubelet[2545]: I0904 17:33:12.058710 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:12.058956 kubelet[2545]: I0904 17:33:12.058738 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:12.058956 kubelet[2545]: I0904 17:33:12.058775 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:33:12.058956 kubelet[2545]: I0904 17:33:12.058794 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a992fb48436c3d86cff7bec73de6a2c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a992fb48436c3d86cff7bec73de6a2c\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:33:12.179762 kubelet[2545]: E0904 17:33:12.179479 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:12.179762 kubelet[2545]: E0904 17:33:12.179501 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:12.179762 kubelet[2545]: E0904 17:33:12.179687 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:12.747847 kubelet[2545]: I0904 17:33:12.747791 2545 apiserver.go:52] "Watching apiserver" Sep 4 17:33:12.757169 kubelet[2545]: I0904 17:33:12.757135 2545 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:33:12.782792 kubelet[2545]: E0904 
17:33:12.782760 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:12.971234 kubelet[2545]: E0904 17:33:12.971188 2545 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:33:12.972925 kubelet[2545]: E0904 17:33:12.971668 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:12.972925 kubelet[2545]: E0904 17:33:12.971775 2545 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 17:33:12.972925 kubelet[2545]: E0904 17:33:12.972188 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:12.982848 kubelet[2545]: I0904 17:33:12.982748 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9827124280000001 podStartE2EDuration="1.982712428s" podCreationTimestamp="2024-09-04 17:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:33:12.972414665 +0000 UTC m=+1.272305318" watchObservedRunningTime="2024-09-04 17:33:12.982712428 +0000 UTC m=+1.282603081" Sep 4 17:33:12.994511 kubelet[2545]: I0904 17:33:12.994417 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9943623719999999 podStartE2EDuration="1.994362372s" podCreationTimestamp="2024-09-04 17:33:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:33:12.994353946 +0000 UTC m=+1.294244599" watchObservedRunningTime="2024-09-04 17:33:12.994362372 +0000 UTC m=+1.294253026" Sep 4 17:33:12.994735 kubelet[2545]: I0904 17:33:12.994601 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.994594277 podStartE2EDuration="1.994594277s" podCreationTimestamp="2024-09-04 17:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:33:12.983666707 +0000 UTC m=+1.283557360" watchObservedRunningTime="2024-09-04 17:33:12.994594277 +0000 UTC m=+1.294484930" Sep 4 17:33:13.785685 kubelet[2545]: E0904 17:33:13.785640 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:13.786615 kubelet[2545]: E0904 17:33:13.786585 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:16.419959 sudo[1639]: pam_unix(sudo:session): session closed for user root Sep 4 17:33:16.421700 sshd[1636]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:16.425993 systemd[1]: sshd@6-10.0.0.157:22-10.0.0.1:59018.service: Deactivated successfully. Sep 4 17:33:16.427892 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:33:16.428120 systemd[1]: session-7.scope: Consumed 4.489s CPU time, 142.1M memory peak, 0B memory swap peak. Sep 4 17:33:16.428550 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:33:16.429389 systemd-logind[1444]: Removed session 7. 
Sep 4 17:33:16.829250 kubelet[2545]: E0904 17:33:16.829138 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:21.417980 kubelet[2545]: E0904 17:33:21.417945 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:21.795288 kubelet[2545]: E0904 17:33:21.795171 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:22.167657 update_engine[1445]: I0904 17:33:22.167603 1445 update_attempter.cc:509] Updating boot flags...
Sep 4 17:33:22.168817 kubelet[2545]: E0904 17:33:22.168399 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:22.195859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2643)
Sep 4 17:33:22.233100 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2646)
Sep 4 17:33:22.260892 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2646)
Sep 4 17:33:22.796769 kubelet[2545]: E0904 17:33:22.796736 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:26.832861 kubelet[2545]: E0904 17:33:26.832810 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:28.157257 kubelet[2545]: I0904 17:33:28.157208 2545 topology_manager.go:215] "Topology Admit Handler" podUID="44c0fc50-453e-4219-9356-83d3f45ad042" podNamespace="kube-system" podName="kube-proxy-d8p7m"
Sep 4 17:33:28.160855 kubelet[2545]: W0904 17:33:28.159597 2545 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 4 17:33:28.160855 kubelet[2545]: E0904 17:33:28.159631 2545 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 4 17:33:28.163366 kubelet[2545]: W0904 17:33:28.163331 2545 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 4 17:33:28.163366 kubelet[2545]: E0904 17:33:28.163364 2545 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Sep 4 17:33:28.165758 systemd[1]: Created slice kubepods-besteffort-pod44c0fc50_453e_4219_9356_83d3f45ad042.slice - libcontainer container kubepods-besteffort-pod44c0fc50_453e_4219_9356_83d3f45ad042.slice.
Sep 4 17:33:28.166205 kubelet[2545]: I0904 17:33:28.166077 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44c0fc50-453e-4219-9356-83d3f45ad042-xtables-lock\") pod \"kube-proxy-d8p7m\" (UID: \"44c0fc50-453e-4219-9356-83d3f45ad042\") " pod="kube-system/kube-proxy-d8p7m"
Sep 4 17:33:28.166205 kubelet[2545]: I0904 17:33:28.166106 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2s75\" (UniqueName: \"kubernetes.io/projected/44c0fc50-453e-4219-9356-83d3f45ad042-kube-api-access-n2s75\") pod \"kube-proxy-d8p7m\" (UID: \"44c0fc50-453e-4219-9356-83d3f45ad042\") " pod="kube-system/kube-proxy-d8p7m"
Sep 4 17:33:28.166205 kubelet[2545]: I0904 17:33:28.166122 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44c0fc50-453e-4219-9356-83d3f45ad042-kube-proxy\") pod \"kube-proxy-d8p7m\" (UID: \"44c0fc50-453e-4219-9356-83d3f45ad042\") " pod="kube-system/kube-proxy-d8p7m"
Sep 4 17:33:28.166205 kubelet[2545]: I0904 17:33:28.166137 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44c0fc50-453e-4219-9356-83d3f45ad042-lib-modules\") pod \"kube-proxy-d8p7m\" (UID: \"44c0fc50-453e-4219-9356-83d3f45ad042\") " pod="kube-system/kube-proxy-d8p7m"
Sep 4 17:33:28.181563 kubelet[2545]: I0904 17:33:28.181528 2545 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 17:33:28.181918 containerd[1460]: time="2024-09-04T17:33:28.181866061Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 17:33:28.182332 kubelet[2545]: I0904 17:33:28.182126 2545 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 17:33:28.193237 kubelet[2545]: I0904 17:33:28.193164 2545 topology_manager.go:215] "Topology Admit Handler" podUID="0d25edb0-2937-4e71-a0fd-91ebf04194f3" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-j4sg6"
Sep 4 17:33:28.200741 systemd[1]: Created slice kubepods-besteffort-pod0d25edb0_2937_4e71_a0fd_91ebf04194f3.slice - libcontainer container kubepods-besteffort-pod0d25edb0_2937_4e71_a0fd_91ebf04194f3.slice.
Sep 4 17:33:28.266983 kubelet[2545]: I0904 17:33:28.266936 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slvsd\" (UniqueName: \"kubernetes.io/projected/0d25edb0-2937-4e71-a0fd-91ebf04194f3-kube-api-access-slvsd\") pod \"tigera-operator-77f994b5bb-j4sg6\" (UID: \"0d25edb0-2937-4e71-a0fd-91ebf04194f3\") " pod="tigera-operator/tigera-operator-77f994b5bb-j4sg6"
Sep 4 17:33:28.266983 kubelet[2545]: I0904 17:33:28.266967 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d25edb0-2937-4e71-a0fd-91ebf04194f3-var-lib-calico\") pod \"tigera-operator-77f994b5bb-j4sg6\" (UID: \"0d25edb0-2937-4e71-a0fd-91ebf04194f3\") " pod="tigera-operator/tigera-operator-77f994b5bb-j4sg6"
Sep 4 17:33:28.503753 containerd[1460]: time="2024-09-04T17:33:28.503642792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-j4sg6,Uid:0d25edb0-2937-4e71-a0fd-91ebf04194f3,Namespace:tigera-operator,Attempt:0,}"
Sep 4 17:33:28.528621 containerd[1460]: time="2024-09-04T17:33:28.528031284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:33:28.528621 containerd[1460]: time="2024-09-04T17:33:28.528585502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:33:28.528621 containerd[1460]: time="2024-09-04T17:33:28.528602193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:33:28.528621 containerd[1460]: time="2024-09-04T17:33:28.528612553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:33:28.549958 systemd[1]: Started cri-containerd-e32b3b3af9e30e44740dd24f5699bbd98cf8e74eec1f2b42880e7f77060e2b0e.scope - libcontainer container e32b3b3af9e30e44740dd24f5699bbd98cf8e74eec1f2b42880e7f77060e2b0e.
Sep 4 17:33:28.584019 containerd[1460]: time="2024-09-04T17:33:28.583982848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-j4sg6,Uid:0d25edb0-2937-4e71-a0fd-91ebf04194f3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e32b3b3af9e30e44740dd24f5699bbd98cf8e74eec1f2b42880e7f77060e2b0e\""
Sep 4 17:33:28.586507 containerd[1460]: time="2024-09-04T17:33:28.586431838Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 17:33:29.267800 kubelet[2545]: E0904 17:33:29.267765 2545 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Sep 4 17:33:29.268194 kubelet[2545]: E0904 17:33:29.267874 2545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44c0fc50-453e-4219-9356-83d3f45ad042-kube-proxy podName:44c0fc50-453e-4219-9356-83d3f45ad042 nodeName:}" failed. No retries permitted until 2024-09-04 17:33:29.767838103 +0000 UTC m=+18.067728756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/44c0fc50-453e-4219-9356-83d3f45ad042-kube-proxy") pod "kube-proxy-d8p7m" (UID: "44c0fc50-453e-4219-9356-83d3f45ad042") : failed to sync configmap cache: timed out waiting for the condition
Sep 4 17:33:29.271338 kubelet[2545]: E0904 17:33:29.271308 2545 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 4 17:33:29.271338 kubelet[2545]: E0904 17:33:29.271333 2545 projected.go:200] Error preparing data for projected volume kube-api-access-n2s75 for pod kube-system/kube-proxy-d8p7m: failed to sync configmap cache: timed out waiting for the condition
Sep 4 17:33:29.271434 kubelet[2545]: E0904 17:33:29.271375 2545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/44c0fc50-453e-4219-9356-83d3f45ad042-kube-api-access-n2s75 podName:44c0fc50-453e-4219-9356-83d3f45ad042 nodeName:}" failed. No retries permitted until 2024-09-04 17:33:29.771364256 +0000 UTC m=+18.071254909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n2s75" (UniqueName: "kubernetes.io/projected/44c0fc50-453e-4219-9356-83d3f45ad042-kube-api-access-n2s75") pod "kube-proxy-d8p7m" (UID: "44c0fc50-453e-4219-9356-83d3f45ad042") : failed to sync configmap cache: timed out waiting for the condition
Sep 4 17:33:29.977384 kubelet[2545]: E0904 17:33:29.977329 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:29.977901 containerd[1460]: time="2024-09-04T17:33:29.977836313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d8p7m,Uid:44c0fc50-453e-4219-9356-83d3f45ad042,Namespace:kube-system,Attempt:0,}"
Sep 4 17:33:30.003950 containerd[1460]: time="2024-09-04T17:33:30.003845863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:33:30.004213 containerd[1460]: time="2024-09-04T17:33:30.003931866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:33:30.004213 containerd[1460]: time="2024-09-04T17:33:30.003982481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:33:30.004213 containerd[1460]: time="2024-09-04T17:33:30.004014983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:33:30.029957 systemd[1]: Started cri-containerd-ded9874451be1f407914e517a6e5ff6182a45931c7d05a1a5cb76355a67dc402.scope - libcontainer container ded9874451be1f407914e517a6e5ff6182a45931c7d05a1a5cb76355a67dc402.
Sep 4 17:33:30.052877 containerd[1460]: time="2024-09-04T17:33:30.052837630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d8p7m,Uid:44c0fc50-453e-4219-9356-83d3f45ad042,Namespace:kube-system,Attempt:0,} returns sandbox id \"ded9874451be1f407914e517a6e5ff6182a45931c7d05a1a5cb76355a67dc402\""
Sep 4 17:33:30.053547 kubelet[2545]: E0904 17:33:30.053519 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:30.055304 containerd[1460]: time="2024-09-04T17:33:30.055238413Z" level=info msg="CreateContainer within sandbox \"ded9874451be1f407914e517a6e5ff6182a45931c7d05a1a5cb76355a67dc402\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 17:33:30.081420 containerd[1460]: time="2024-09-04T17:33:30.081362772Z" level=info msg="CreateContainer within sandbox \"ded9874451be1f407914e517a6e5ff6182a45931c7d05a1a5cb76355a67dc402\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"336d640876965b56c601b1fc428986f086476ab46c21941eb398a3d53b0cfb83\""
Sep 4 17:33:30.081859 containerd[1460]: time="2024-09-04T17:33:30.081811741Z" level=info msg="StartContainer for \"336d640876965b56c601b1fc428986f086476ab46c21941eb398a3d53b0cfb83\""
Sep 4 17:33:30.118965 systemd[1]: Started cri-containerd-336d640876965b56c601b1fc428986f086476ab46c21941eb398a3d53b0cfb83.scope - libcontainer container 336d640876965b56c601b1fc428986f086476ab46c21941eb398a3d53b0cfb83.
Sep 4 17:33:30.149250 containerd[1460]: time="2024-09-04T17:33:30.149195933Z" level=info msg="StartContainer for \"336d640876965b56c601b1fc428986f086476ab46c21941eb398a3d53b0cfb83\" returns successfully"
Sep 4 17:33:30.585066 containerd[1460]: time="2024-09-04T17:33:30.584995760Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:30.585760 containerd[1460]: time="2024-09-04T17:33:30.585701664Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136489"
Sep 4 17:33:30.586934 containerd[1460]: time="2024-09-04T17:33:30.586900197Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:30.589208 containerd[1460]: time="2024-09-04T17:33:30.589171266Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:30.589719 containerd[1460]: time="2024-09-04T17:33:30.589687562Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.003228181s"
Sep 4 17:33:30.589761 containerd[1460]: time="2024-09-04T17:33:30.589716997Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Sep 4 17:33:30.591539 containerd[1460]: time="2024-09-04T17:33:30.591509162Z" level=info msg="CreateContainer within sandbox \"e32b3b3af9e30e44740dd24f5699bbd98cf8e74eec1f2b42880e7f77060e2b0e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 17:33:30.603802 containerd[1460]: time="2024-09-04T17:33:30.603756532Z" level=info msg="CreateContainer within sandbox \"e32b3b3af9e30e44740dd24f5699bbd98cf8e74eec1f2b42880e7f77060e2b0e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f5d181cf758ee3a0e779923524217fcf18deb1be1d7ee575a7d49ccb437ea402\""
Sep 4 17:33:30.604233 containerd[1460]: time="2024-09-04T17:33:30.604207845Z" level=info msg="StartContainer for \"f5d181cf758ee3a0e779923524217fcf18deb1be1d7ee575a7d49ccb437ea402\""
Sep 4 17:33:30.626972 systemd[1]: Started cri-containerd-f5d181cf758ee3a0e779923524217fcf18deb1be1d7ee575a7d49ccb437ea402.scope - libcontainer container f5d181cf758ee3a0e779923524217fcf18deb1be1d7ee575a7d49ccb437ea402.
Sep 4 17:33:30.652499 containerd[1460]: time="2024-09-04T17:33:30.652461507Z" level=info msg="StartContainer for \"f5d181cf758ee3a0e779923524217fcf18deb1be1d7ee575a7d49ccb437ea402\" returns successfully"
Sep 4 17:33:30.786576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078191200.mount: Deactivated successfully.
Sep 4 17:33:30.809552 kubelet[2545]: E0904 17:33:30.809531 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:30.822843 kubelet[2545]: I0904 17:33:30.822765 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d8p7m" podStartSLOduration=2.822748974 podStartE2EDuration="2.822748974s" podCreationTimestamp="2024-09-04 17:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:33:30.822655298 +0000 UTC m=+19.122545951" watchObservedRunningTime="2024-09-04 17:33:30.822748974 +0000 UTC m=+19.122639627"
Sep 4 17:33:30.823193 kubelet[2545]: I0904 17:33:30.822892 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-j4sg6" podStartSLOduration=0.818526174 podStartE2EDuration="2.822886214s" podCreationTimestamp="2024-09-04 17:33:28 +0000 UTC" firstStartedPulling="2024-09-04 17:33:28.586069373 +0000 UTC m=+16.885960026" lastFinishedPulling="2024-09-04 17:33:30.590429413 +0000 UTC m=+18.890320066" observedRunningTime="2024-09-04 17:33:30.81471978 +0000 UTC m=+19.114610433" watchObservedRunningTime="2024-09-04 17:33:30.822886214 +0000 UTC m=+19.122776867"
Sep 4 17:33:33.412033 kubelet[2545]: I0904 17:33:33.411942 2545 topology_manager.go:215] "Topology Admit Handler" podUID="bef752ca-a069-4b39-84ca-d7fa08806299" podNamespace="calico-system" podName="calico-typha-6c6b84b55c-hjggt"
Sep 4 17:33:33.427049 systemd[1]: Created slice kubepods-besteffort-podbef752ca_a069_4b39_84ca_d7fa08806299.slice - libcontainer container kubepods-besteffort-podbef752ca_a069_4b39_84ca_d7fa08806299.slice.
Sep 4 17:33:33.436228 kubelet[2545]: I0904 17:33:33.436178 2545 topology_manager.go:215] "Topology Admit Handler" podUID="9382fd9e-2ed1-4a18-8850-9662b5f104e9" podNamespace="calico-system" podName="calico-node-5cp78"
Sep 4 17:33:33.444517 systemd[1]: Created slice kubepods-besteffort-pod9382fd9e_2ed1_4a18_8850_9662b5f104e9.slice - libcontainer container kubepods-besteffort-pod9382fd9e_2ed1_4a18_8850_9662b5f104e9.slice.
Sep 4 17:33:33.502492 kubelet[2545]: I0904 17:33:33.502436 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bef752ca-a069-4b39-84ca-d7fa08806299-typha-certs\") pod \"calico-typha-6c6b84b55c-hjggt\" (UID: \"bef752ca-a069-4b39-84ca-d7fa08806299\") " pod="calico-system/calico-typha-6c6b84b55c-hjggt"
Sep 4 17:33:33.502492 kubelet[2545]: I0904 17:33:33.502468 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9382fd9e-2ed1-4a18-8850-9662b5f104e9-policysync\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502492 kubelet[2545]: I0904 17:33:33.502487 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9382fd9e-2ed1-4a18-8850-9662b5f104e9-var-lib-calico\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502492 kubelet[2545]: I0904 17:33:33.502499 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9382fd9e-2ed1-4a18-8850-9662b5f104e9-cni-bin-dir\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502733 kubelet[2545]: I0904 17:33:33.502517 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9382fd9e-2ed1-4a18-8850-9662b5f104e9-node-certs\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502733 kubelet[2545]: I0904 17:33:33.502534 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bef752ca-a069-4b39-84ca-d7fa08806299-tigera-ca-bundle\") pod \"calico-typha-6c6b84b55c-hjggt\" (UID: \"bef752ca-a069-4b39-84ca-d7fa08806299\") " pod="calico-system/calico-typha-6c6b84b55c-hjggt"
Sep 4 17:33:33.502733 kubelet[2545]: I0904 17:33:33.502551 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9382fd9e-2ed1-4a18-8850-9662b5f104e9-tigera-ca-bundle\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502733 kubelet[2545]: I0904 17:33:33.502600 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99l6f\" (UniqueName: \"kubernetes.io/projected/9382fd9e-2ed1-4a18-8850-9662b5f104e9-kube-api-access-99l6f\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502733 kubelet[2545]: I0904 17:33:33.502616 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4pv2\" (UniqueName: \"kubernetes.io/projected/bef752ca-a069-4b39-84ca-d7fa08806299-kube-api-access-z4pv2\") pod \"calico-typha-6c6b84b55c-hjggt\" (UID: \"bef752ca-a069-4b39-84ca-d7fa08806299\") " pod="calico-system/calico-typha-6c6b84b55c-hjggt"
Sep 4 17:33:33.502903 kubelet[2545]: I0904 17:33:33.502629 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9382fd9e-2ed1-4a18-8850-9662b5f104e9-xtables-lock\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502903 kubelet[2545]: I0904 17:33:33.502642 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9382fd9e-2ed1-4a18-8850-9662b5f104e9-cni-log-dir\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502903 kubelet[2545]: I0904 17:33:33.502656 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9382fd9e-2ed1-4a18-8850-9662b5f104e9-cni-net-dir\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502903 kubelet[2545]: I0904 17:33:33.502688 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9382fd9e-2ed1-4a18-8850-9662b5f104e9-flexvol-driver-host\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.502903 kubelet[2545]: I0904 17:33:33.502701 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9382fd9e-2ed1-4a18-8850-9662b5f104e9-lib-modules\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.503026 kubelet[2545]: I0904 17:33:33.502714 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9382fd9e-2ed1-4a18-8850-9662b5f104e9-var-run-calico\") pod \"calico-node-5cp78\" (UID: \"9382fd9e-2ed1-4a18-8850-9662b5f104e9\") " pod="calico-system/calico-node-5cp78"
Sep 4 17:33:33.546870 kubelet[2545]: I0904 17:33:33.546808 2545 topology_manager.go:215] "Topology Admit Handler" podUID="b3ad58bb-75f5-444f-ace4-e9ea2e8aac02" podNamespace="calico-system" podName="csi-node-driver-2m97v"
Sep 4 17:33:33.547521 kubelet[2545]: E0904 17:33:33.547118 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m97v" podUID="b3ad58bb-75f5-444f-ace4-e9ea2e8aac02"
Sep 4 17:33:33.602944 kubelet[2545]: I0904 17:33:33.602907 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3ad58bb-75f5-444f-ace4-e9ea2e8aac02-kubelet-dir\") pod \"csi-node-driver-2m97v\" (UID: \"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02\") " pod="calico-system/csi-node-driver-2m97v"
Sep 4 17:33:33.603107 kubelet[2545]: I0904 17:33:33.603001 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b3ad58bb-75f5-444f-ace4-e9ea2e8aac02-varrun\") pod \"csi-node-driver-2m97v\" (UID: \"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02\") " pod="calico-system/csi-node-driver-2m97v"
Sep 4 17:33:33.603107 kubelet[2545]: I0904 17:33:33.603018 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b3ad58bb-75f5-444f-ace4-e9ea2e8aac02-registration-dir\") pod \"csi-node-driver-2m97v\" (UID: \"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02\") " pod="calico-system/csi-node-driver-2m97v"
Sep 4 17:33:33.603107 kubelet[2545]: I0904 17:33:33.603047 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b3ad58bb-75f5-444f-ace4-e9ea2e8aac02-socket-dir\") pod \"csi-node-driver-2m97v\" (UID: \"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02\") " pod="calico-system/csi-node-driver-2m97v"
Sep 4 17:33:33.603107 kubelet[2545]: I0904 17:33:33.603060 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gcb\" (UniqueName: \"kubernetes.io/projected/b3ad58bb-75f5-444f-ace4-e9ea2e8aac02-kube-api-access-d2gcb\") pod \"csi-node-driver-2m97v\" (UID: \"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02\") " pod="calico-system/csi-node-driver-2m97v"
Sep 4 17:33:33.605413 kubelet[2545]: E0904 17:33:33.605370 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.605413 kubelet[2545]: W0904 17:33:33.605399 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.605682 kubelet[2545]: E0904 17:33:33.605435 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.607227 kubelet[2545]: E0904 17:33:33.607199 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.607227 kubelet[2545]: W0904 17:33:33.607216 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.607227 kubelet[2545]: E0904 17:33:33.607231 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.613085 kubelet[2545]: E0904 17:33:33.613014 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.613085 kubelet[2545]: W0904 17:33:33.613027 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.613085 kubelet[2545]: E0904 17:33:33.613050 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.613600 kubelet[2545]: E0904 17:33:33.613408 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.613600 kubelet[2545]: W0904 17:33:33.613421 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.613600 kubelet[2545]: E0904 17:33:33.613474 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.614143 kubelet[2545]: E0904 17:33:33.614049 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.614143 kubelet[2545]: W0904 17:33:33.614063 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.614230 kubelet[2545]: E0904 17:33:33.614163 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.614536 kubelet[2545]: E0904 17:33:33.614468 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.614536 kubelet[2545]: W0904 17:33:33.614481 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.614621 kubelet[2545]: E0904 17:33:33.614551 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.614729 kubelet[2545]: E0904 17:33:33.614703 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.614729 kubelet[2545]: W0904 17:33:33.614712 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.614913 kubelet[2545]: E0904 17:33:33.614792 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.615500 kubelet[2545]: E0904 17:33:33.615475 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.615500 kubelet[2545]: W0904 17:33:33.615491 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.615591 kubelet[2545]: E0904 17:33:33.615575 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.618909 kubelet[2545]: E0904 17:33:33.618811 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.619017 kubelet[2545]: W0904 17:33:33.618911 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.619017 kubelet[2545]: E0904 17:33:33.618938 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.619229 kubelet[2545]: E0904 17:33:33.619204 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.619363 kubelet[2545]: W0904 17:33:33.619309 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.619363 kubelet[2545]: E0904 17:33:33.619327 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.704254 kubelet[2545]: E0904 17:33:33.704030 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.704254 kubelet[2545]: W0904 17:33:33.704053 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.704254 kubelet[2545]: E0904 17:33:33.704072 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.705202 kubelet[2545]: E0904 17:33:33.704375 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.705202 kubelet[2545]: W0904 17:33:33.704393 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.705202 kubelet[2545]: E0904 17:33:33.704414 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.705202 kubelet[2545]: E0904 17:33:33.704766 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.705202 kubelet[2545]: W0904 17:33:33.704809 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.705202 kubelet[2545]: E0904 17:33:33.704865 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.705202 kubelet[2545]: E0904 17:33:33.705203 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.705936 kubelet[2545]: W0904 17:33:33.705213 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.705936 kubelet[2545]: E0904 17:33:33.705228 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:33:33.705936 kubelet[2545]: E0904 17:33:33.705488 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:33:33.705936 kubelet[2545]: W0904 17:33:33.705529 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:33:33.705936 kubelet[2545]: E0904 17:33:33.705566 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 4 17:33:33.705936 kubelet[2545]: E0904 17:33:33.705863 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.705936 kubelet[2545]: W0904 17:33:33.705873 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.705936 kubelet[2545]: E0904 17:33:33.705906 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.706230 kubelet[2545]: E0904 17:33:33.706100 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.706230 kubelet[2545]: W0904 17:33:33.706110 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.706230 kubelet[2545]: E0904 17:33:33.706138 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.706329 kubelet[2545]: E0904 17:33:33.706319 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.706329 kubelet[2545]: W0904 17:33:33.706328 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.706467 kubelet[2545]: E0904 17:33:33.706447 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.706587 kubelet[2545]: E0904 17:33:33.706558 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.706587 kubelet[2545]: W0904 17:33:33.706570 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.706667 kubelet[2545]: E0904 17:33:33.706588 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.706876 kubelet[2545]: E0904 17:33:33.706858 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.706876 kubelet[2545]: W0904 17:33:33.706872 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.706984 kubelet[2545]: E0904 17:33:33.706908 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.707195 kubelet[2545]: E0904 17:33:33.707178 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.707195 kubelet[2545]: W0904 17:33:33.707191 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.707276 kubelet[2545]: E0904 17:33:33.707223 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.707459 kubelet[2545]: E0904 17:33:33.707429 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.707459 kubelet[2545]: W0904 17:33:33.707441 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.707531 kubelet[2545]: E0904 17:33:33.707491 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.707715 kubelet[2545]: E0904 17:33:33.707688 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.707715 kubelet[2545]: W0904 17:33:33.707702 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.707853 kubelet[2545]: E0904 17:33:33.707832 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.708049 kubelet[2545]: E0904 17:33:33.708034 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.708049 kubelet[2545]: W0904 17:33:33.708044 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.708143 kubelet[2545]: E0904 17:33:33.708068 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.708242 kubelet[2545]: E0904 17:33:33.708227 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.708242 kubelet[2545]: W0904 17:33:33.708237 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.708327 kubelet[2545]: E0904 17:33:33.708259 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.708484 kubelet[2545]: E0904 17:33:33.708466 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.708484 kubelet[2545]: W0904 17:33:33.708479 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.708568 kubelet[2545]: E0904 17:33:33.708536 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.708803 kubelet[2545]: E0904 17:33:33.708775 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.708803 kubelet[2545]: W0904 17:33:33.708790 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.708803 kubelet[2545]: E0904 17:33:33.708808 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.710483 kubelet[2545]: E0904 17:33:33.710458 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.710483 kubelet[2545]: W0904 17:33:33.710475 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.710592 kubelet[2545]: E0904 17:33:33.710500 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.710711 kubelet[2545]: E0904 17:33:33.710682 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.710711 kubelet[2545]: W0904 17:33:33.710695 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.710779 kubelet[2545]: E0904 17:33:33.710746 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.710951 kubelet[2545]: E0904 17:33:33.710930 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.710951 kubelet[2545]: W0904 17:33:33.710946 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.711255 kubelet[2545]: E0904 17:33:33.711134 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.711255 kubelet[2545]: E0904 17:33:33.711238 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.711255 kubelet[2545]: W0904 17:33:33.711249 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.711374 kubelet[2545]: E0904 17:33:33.711346 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.711506 kubelet[2545]: E0904 17:33:33.711492 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.711539 kubelet[2545]: W0904 17:33:33.711506 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.711651 kubelet[2545]: E0904 17:33:33.711589 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.711773 kubelet[2545]: E0904 17:33:33.711760 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.711800 kubelet[2545]: W0904 17:33:33.711773 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.711800 kubelet[2545]: E0904 17:33:33.711790 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.712072 kubelet[2545]: E0904 17:33:33.712059 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.712072 kubelet[2545]: W0904 17:33:33.712070 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.712147 kubelet[2545]: E0904 17:33:33.712087 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.712303 kubelet[2545]: E0904 17:33:33.712292 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.712303 kubelet[2545]: W0904 17:33:33.712302 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.712366 kubelet[2545]: E0904 17:33:33.712313 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:33.721905 kubelet[2545]: E0904 17:33:33.721871 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:33.721905 kubelet[2545]: W0904 17:33:33.721898 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:33.722058 kubelet[2545]: E0904 17:33:33.721921 2545 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:33.732052 kubelet[2545]: E0904 17:33:33.732022 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:33.732713 containerd[1460]: time="2024-09-04T17:33:33.732665053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c6b84b55c-hjggt,Uid:bef752ca-a069-4b39-84ca-d7fa08806299,Namespace:calico-system,Attempt:0,}" Sep 4 17:33:33.748200 kubelet[2545]: E0904 17:33:33.748170 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:33.748743 containerd[1460]: time="2024-09-04T17:33:33.748705753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5cp78,Uid:9382fd9e-2ed1-4a18-8850-9662b5f104e9,Namespace:calico-system,Attempt:0,}" Sep 4 17:33:33.827853 containerd[1460]: time="2024-09-04T17:33:33.826415750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:33.827853 containerd[1460]: time="2024-09-04T17:33:33.826473149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:33.827853 containerd[1460]: time="2024-09-04T17:33:33.826496884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:33.827853 containerd[1460]: time="2024-09-04T17:33:33.826514547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:33.834875 containerd[1460]: time="2024-09-04T17:33:33.833603820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:33.834875 containerd[1460]: time="2024-09-04T17:33:33.833661259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:33.834875 containerd[1460]: time="2024-09-04T17:33:33.833678181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:33.834875 containerd[1460]: time="2024-09-04T17:33:33.833690784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:33.854406 systemd[1]: Started cri-containerd-aabeec51bbbd3ebdbac77b18209f1f24d041f8824ad5a02c4c07483103dd8e51.scope - libcontainer container aabeec51bbbd3ebdbac77b18209f1f24d041f8824ad5a02c4c07483103dd8e51. Sep 4 17:33:33.873109 systemd[1]: Started cri-containerd-91dc76161c10e79f6b884136f311a8bb3ef1047250283425ac97252fb87d6fe2.scope - libcontainer container 91dc76161c10e79f6b884136f311a8bb3ef1047250283425ac97252fb87d6fe2. 
Sep 4 17:33:33.904554 containerd[1460]: time="2024-09-04T17:33:33.904506707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5cp78,Uid:9382fd9e-2ed1-4a18-8850-9662b5f104e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"aabeec51bbbd3ebdbac77b18209f1f24d041f8824ad5a02c4c07483103dd8e51\"" Sep 4 17:33:33.908351 kubelet[2545]: E0904 17:33:33.908327 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:33.917093 containerd[1460]: time="2024-09-04T17:33:33.917049028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:33:33.928016 containerd[1460]: time="2024-09-04T17:33:33.927893776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c6b84b55c-hjggt,Uid:bef752ca-a069-4b39-84ca-d7fa08806299,Namespace:calico-system,Attempt:0,} returns sandbox id \"91dc76161c10e79f6b884136f311a8bb3ef1047250283425ac97252fb87d6fe2\"" Sep 4 17:33:33.929128 kubelet[2545]: E0904 17:33:33.928843 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:35.329645 containerd[1460]: time="2024-09-04T17:33:35.329582169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:35.330589 containerd[1460]: time="2024-09-04T17:33:35.330532551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Sep 4 17:33:35.331777 containerd[1460]: time="2024-09-04T17:33:35.331743084Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:35.334042 
containerd[1460]: time="2024-09-04T17:33:35.334012342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:35.334569 containerd[1460]: time="2024-09-04T17:33:35.334539425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.417455432s" Sep 4 17:33:35.334606 containerd[1460]: time="2024-09-04T17:33:35.334565975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Sep 4 17:33:35.335555 containerd[1460]: time="2024-09-04T17:33:35.335504505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:33:35.346752 containerd[1460]: time="2024-09-04T17:33:35.346724046Z" level=info msg="CreateContainer within sandbox \"aabeec51bbbd3ebdbac77b18209f1f24d041f8824ad5a02c4c07483103dd8e51\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:33:35.363422 containerd[1460]: time="2024-09-04T17:33:35.363382131Z" level=info msg="CreateContainer within sandbox \"aabeec51bbbd3ebdbac77b18209f1f24d041f8824ad5a02c4c07483103dd8e51\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4\"" Sep 4 17:33:35.367394 containerd[1460]: time="2024-09-04T17:33:35.366799373Z" level=info msg="StartContainer for \"202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4\"" Sep 4 17:33:35.397080 systemd[1]: 
run-containerd-runc-k8s.io-202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4-runc.xKNUi8.mount: Deactivated successfully. Sep 4 17:33:35.404985 systemd[1]: Started cri-containerd-202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4.scope - libcontainer container 202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4. Sep 4 17:33:35.434759 containerd[1460]: time="2024-09-04T17:33:35.434725471Z" level=info msg="StartContainer for \"202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4\" returns successfully" Sep 4 17:33:35.444744 systemd[1]: cri-containerd-202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4.scope: Deactivated successfully. Sep 4 17:33:35.528080 containerd[1460]: time="2024-09-04T17:33:35.528015486Z" level=info msg="shim disconnected" id=202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4 namespace=k8s.io Sep 4 17:33:35.528080 containerd[1460]: time="2024-09-04T17:33:35.528072664Z" level=warning msg="cleaning up after shim disconnected" id=202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4 namespace=k8s.io Sep 4 17:33:35.528080 containerd[1460]: time="2024-09-04T17:33:35.528080969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:33:35.549502 containerd[1460]: time="2024-09-04T17:33:35.549438243Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:33:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 17:33:35.611855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-202c75681bd33a64772d6d9670137d835e807e3184e1d99719306279ff86c5d4-rootfs.mount: Deactivated successfully. 
Sep 4 17:33:35.773082 kubelet[2545]: E0904 17:33:35.772991 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m97v" podUID="b3ad58bb-75f5-444f-ace4-e9ea2e8aac02" Sep 4 17:33:35.817608 kubelet[2545]: E0904 17:33:35.817577 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:37.772053 kubelet[2545]: E0904 17:33:37.771997 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m97v" podUID="b3ad58bb-75f5-444f-ace4-e9ea2e8aac02" Sep 4 17:33:37.979052 containerd[1460]: time="2024-09-04T17:33:37.978983213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:37.979852 containerd[1460]: time="2024-09-04T17:33:37.979780094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Sep 4 17:33:37.981012 containerd[1460]: time="2024-09-04T17:33:37.980981878Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:37.982936 containerd[1460]: time="2024-09-04T17:33:37.982894683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:37.983443 containerd[1460]: 
time="2024-09-04T17:33:37.983403481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.647870834s" Sep 4 17:33:37.983481 containerd[1460]: time="2024-09-04T17:33:37.983440881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Sep 4 17:33:37.984379 containerd[1460]: time="2024-09-04T17:33:37.984357879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:33:37.992862 containerd[1460]: time="2024-09-04T17:33:37.992730124Z" level=info msg="CreateContainer within sandbox \"91dc76161c10e79f6b884136f311a8bb3ef1047250283425ac97252fb87d6fe2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:33:38.008334 containerd[1460]: time="2024-09-04T17:33:38.008178538Z" level=info msg="CreateContainer within sandbox \"91dc76161c10e79f6b884136f311a8bb3ef1047250283425ac97252fb87d6fe2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"639eef8eb02050f41ed1bc84f9dfcf24d1267eee15d7ddf0103fc7d8de40b0f3\"" Sep 4 17:33:38.009015 containerd[1460]: time="2024-09-04T17:33:38.008974107Z" level=info msg="StartContainer for \"639eef8eb02050f41ed1bc84f9dfcf24d1267eee15d7ddf0103fc7d8de40b0f3\"" Sep 4 17:33:38.043960 systemd[1]: Started cri-containerd-639eef8eb02050f41ed1bc84f9dfcf24d1267eee15d7ddf0103fc7d8de40b0f3.scope - libcontainer container 639eef8eb02050f41ed1bc84f9dfcf24d1267eee15d7ddf0103fc7d8de40b0f3. 
Sep 4 17:33:38.083938 containerd[1460]: time="2024-09-04T17:33:38.083869207Z" level=info msg="StartContainer for \"639eef8eb02050f41ed1bc84f9dfcf24d1267eee15d7ddf0103fc7d8de40b0f3\" returns successfully" Sep 4 17:33:38.823414 kubelet[2545]: E0904 17:33:38.823382 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:39.772265 kubelet[2545]: E0904 17:33:39.772204 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m97v" podUID="b3ad58bb-75f5-444f-ace4-e9ea2e8aac02" Sep 4 17:33:39.824098 kubelet[2545]: I0904 17:33:39.824057 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:33:39.824559 kubelet[2545]: E0904 17:33:39.824524 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:41.774137 kubelet[2545]: E0904 17:33:41.774092 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2m97v" podUID="b3ad58bb-75f5-444f-ace4-e9ea2e8aac02" Sep 4 17:33:42.561113 systemd[1]: Started sshd@7-10.0.0.157:22-10.0.0.1:45294.service - OpenSSH per-connection server daemon (10.0.0.1:45294). 
Sep 4 17:33:42.562179 containerd[1460]: time="2024-09-04T17:33:42.562131964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:42.564204 containerd[1460]: time="2024-09-04T17:33:42.564142608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Sep 4 17:33:42.567110 containerd[1460]: time="2024-09-04T17:33:42.567083713Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:42.569602 containerd[1460]: time="2024-09-04T17:33:42.569559563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:42.570375 containerd[1460]: time="2024-09-04T17:33:42.570340653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.585955341s" Sep 4 17:33:42.570375 containerd[1460]: time="2024-09-04T17:33:42.570369978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Sep 4 17:33:42.572447 containerd[1460]: time="2024-09-04T17:33:42.572419646Z" level=info msg="CreateContainer within sandbox \"aabeec51bbbd3ebdbac77b18209f1f24d041f8824ad5a02c4c07483103dd8e51\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:33:42.588169 containerd[1460]: time="2024-09-04T17:33:42.588118892Z" level=info msg="CreateContainer within 
sandbox \"aabeec51bbbd3ebdbac77b18209f1f24d041f8824ad5a02c4c07483103dd8e51\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ee8697a59be541c0a0b41e78076726a2fe8a4df4627b97b58f982d1e700aac9e\"" Sep 4 17:33:42.588730 containerd[1460]: time="2024-09-04T17:33:42.588688634Z" level=info msg="StartContainer for \"ee8697a59be541c0a0b41e78076726a2fe8a4df4627b97b58f982d1e700aac9e\"" Sep 4 17:33:42.600011 sshd[3197]: Accepted publickey for core from 10.0.0.1 port 45294 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:33:42.601993 sshd[3197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:42.607753 systemd-logind[1444]: New session 8 of user core. Sep 4 17:33:42.615951 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:33:42.621434 systemd[1]: Started cri-containerd-ee8697a59be541c0a0b41e78076726a2fe8a4df4627b97b58f982d1e700aac9e.scope - libcontainer container ee8697a59be541c0a0b41e78076726a2fe8a4df4627b97b58f982d1e700aac9e. Sep 4 17:33:42.651296 containerd[1460]: time="2024-09-04T17:33:42.651249517Z" level=info msg="StartContainer for \"ee8697a59be541c0a0b41e78076726a2fe8a4df4627b97b58f982d1e700aac9e\" returns successfully" Sep 4 17:33:42.736288 sshd[3197]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:42.740923 systemd[1]: sshd@7-10.0.0.157:22-10.0.0.1:45294.service: Deactivated successfully. Sep 4 17:33:42.743506 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:33:42.744218 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:33:42.745688 systemd-logind[1444]: Removed session 8. 
Sep 4 17:33:42.830722 kubelet[2545]: E0904 17:33:42.830579 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:43.015318 kubelet[2545]: I0904 17:33:43.015258 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c6b84b55c-hjggt" podStartSLOduration=5.960616105 podStartE2EDuration="10.015242142s" podCreationTimestamp="2024-09-04 17:33:33 +0000 UTC" firstStartedPulling="2024-09-04 17:33:33.929588322 +0000 UTC m=+22.229478975" lastFinishedPulling="2024-09-04 17:33:37.984214359 +0000 UTC m=+26.284105012" observedRunningTime="2024-09-04 17:33:38.830767198 +0000 UTC m=+27.130657851" watchObservedRunningTime="2024-09-04 17:33:43.015242142 +0000 UTC m=+31.315132795" Sep 4 17:33:43.551054 systemd[1]: cri-containerd-ee8697a59be541c0a0b41e78076726a2fe8a4df4627b97b58f982d1e700aac9e.scope: Deactivated successfully. Sep 4 17:33:43.564239 kubelet[2545]: I0904 17:33:43.564211 2545 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:33:43.572473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee8697a59be541c0a0b41e78076726a2fe8a4df4627b97b58f982d1e700aac9e-rootfs.mount: Deactivated successfully. 
Sep 4 17:33:43.584534 kubelet[2545]: I0904 17:33:43.583101 2545 topology_manager.go:215] "Topology Admit Handler" podUID="72f5b034-8e53-4dda-a746-9a05ab61c7bd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hjz5r" Sep 4 17:33:43.591441 kubelet[2545]: I0904 17:33:43.588462 2545 topology_manager.go:215] "Topology Admit Handler" podUID="66bd0464-0844-44e1-8cd9-a36b4d73396c" podNamespace="calico-system" podName="calico-kube-controllers-6d8b6c85-kqmww" Sep 4 17:33:43.591441 kubelet[2545]: I0904 17:33:43.588575 2545 topology_manager.go:215] "Topology Admit Handler" podUID="302bd635-ccd1-4c46-9368-eb8aaa152294" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2nnsb" Sep 4 17:33:43.601231 systemd[1]: Created slice kubepods-burstable-pod72f5b034_8e53_4dda_a746_9a05ab61c7bd.slice - libcontainer container kubepods-burstable-pod72f5b034_8e53_4dda_a746_9a05ab61c7bd.slice. Sep 4 17:33:43.609483 systemd[1]: Created slice kubepods-burstable-pod302bd635_ccd1_4c46_9368_eb8aaa152294.slice - libcontainer container kubepods-burstable-pod302bd635_ccd1_4c46_9368_eb8aaa152294.slice. Sep 4 17:33:43.621528 systemd[1]: Created slice kubepods-besteffort-pod66bd0464_0844_44e1_8cd9_a36b4d73396c.slice - libcontainer container kubepods-besteffort-pod66bd0464_0844_44e1_8cd9_a36b4d73396c.slice. 
Sep 4 17:33:43.776575 kubelet[2545]: I0904 17:33:43.776541 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/302bd635-ccd1-4c46-9368-eb8aaa152294-config-volume\") pod \"coredns-7db6d8ff4d-2nnsb\" (UID: \"302bd635-ccd1-4c46-9368-eb8aaa152294\") " pod="kube-system/coredns-7db6d8ff4d-2nnsb" Sep 4 17:33:43.776575 kubelet[2545]: I0904 17:33:43.776582 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72f5b034-8e53-4dda-a746-9a05ab61c7bd-config-volume\") pod \"coredns-7db6d8ff4d-hjz5r\" (UID: \"72f5b034-8e53-4dda-a746-9a05ab61c7bd\") " pod="kube-system/coredns-7db6d8ff4d-hjz5r" Sep 4 17:33:43.776748 kubelet[2545]: I0904 17:33:43.776605 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbfwj\" (UniqueName: \"kubernetes.io/projected/72f5b034-8e53-4dda-a746-9a05ab61c7bd-kube-api-access-vbfwj\") pod \"coredns-7db6d8ff4d-hjz5r\" (UID: \"72f5b034-8e53-4dda-a746-9a05ab61c7bd\") " pod="kube-system/coredns-7db6d8ff4d-hjz5r" Sep 4 17:33:43.776748 kubelet[2545]: I0904 17:33:43.776695 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptjhf\" (UniqueName: \"kubernetes.io/projected/302bd635-ccd1-4c46-9368-eb8aaa152294-kube-api-access-ptjhf\") pod \"coredns-7db6d8ff4d-2nnsb\" (UID: \"302bd635-ccd1-4c46-9368-eb8aaa152294\") " pod="kube-system/coredns-7db6d8ff4d-2nnsb" Sep 4 17:33:43.776748 kubelet[2545]: I0904 17:33:43.776723 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66bd0464-0844-44e1-8cd9-a36b4d73396c-tigera-ca-bundle\") pod \"calico-kube-controllers-6d8b6c85-kqmww\" (UID: \"66bd0464-0844-44e1-8cd9-a36b4d73396c\") " 
pod="calico-system/calico-kube-controllers-6d8b6c85-kqmww" Sep 4 17:33:43.776902 kubelet[2545]: I0904 17:33:43.776798 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpf5g\" (UniqueName: \"kubernetes.io/projected/66bd0464-0844-44e1-8cd9-a36b4d73396c-kube-api-access-tpf5g\") pod \"calico-kube-controllers-6d8b6c85-kqmww\" (UID: \"66bd0464-0844-44e1-8cd9-a36b4d73396c\") " pod="calico-system/calico-kube-controllers-6d8b6c85-kqmww" Sep 4 17:33:43.777520 systemd[1]: Created slice kubepods-besteffort-podb3ad58bb_75f5_444f_ace4_e9ea2e8aac02.slice - libcontainer container kubepods-besteffort-podb3ad58bb_75f5_444f_ace4_e9ea2e8aac02.slice. Sep 4 17:33:43.779303 containerd[1460]: time="2024-09-04T17:33:43.779273014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2m97v,Uid:b3ad58bb-75f5-444f-ace4-e9ea2e8aac02,Namespace:calico-system,Attempt:0,}" Sep 4 17:33:43.831624 kubelet[2545]: E0904 17:33:43.831604 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:44.012281 containerd[1460]: time="2024-09-04T17:33:44.012200088Z" level=info msg="shim disconnected" id=ee8697a59be541c0a0b41e78076726a2fe8a4df4627b97b58f982d1e700aac9e namespace=k8s.io Sep 4 17:33:44.012281 containerd[1460]: time="2024-09-04T17:33:44.012260161Z" level=warning msg="cleaning up after shim disconnected" id=ee8697a59be541c0a0b41e78076726a2fe8a4df4627b97b58f982d1e700aac9e namespace=k8s.io Sep 4 17:33:44.012281 containerd[1460]: time="2024-09-04T17:33:44.012272584Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:33:44.079893 containerd[1460]: time="2024-09-04T17:33:44.079831597Z" level=error msg="Failed to destroy network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.080284 containerd[1460]: time="2024-09-04T17:33:44.080256767Z" level=error msg="encountered an error cleaning up failed sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.080330 containerd[1460]: time="2024-09-04T17:33:44.080310578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2m97v,Uid:b3ad58bb-75f5-444f-ace4-e9ea2e8aac02,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.080579 kubelet[2545]: E0904 17:33:44.080521 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.080627 kubelet[2545]: E0904 17:33:44.080602 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-2m97v" Sep 4 17:33:44.080650 kubelet[2545]: E0904 17:33:44.080626 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2m97v" Sep 4 17:33:44.080704 kubelet[2545]: E0904 17:33:44.080675 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2m97v_calico-system(b3ad58bb-75f5-444f-ace4-e9ea2e8aac02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2m97v_calico-system(b3ad58bb-75f5-444f-ace4-e9ea2e8aac02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2m97v" podUID="b3ad58bb-75f5-444f-ace4-e9ea2e8aac02" Sep 4 17:33:44.082022 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c-shm.mount: Deactivated successfully. 
Sep 4 17:33:44.208675 kubelet[2545]: E0904 17:33:44.208618 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:44.209156 containerd[1460]: time="2024-09-04T17:33:44.209112341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hjz5r,Uid:72f5b034-8e53-4dda-a746-9a05ab61c7bd,Namespace:kube-system,Attempt:0,}" Sep 4 17:33:44.216913 kubelet[2545]: E0904 17:33:44.216890 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:44.217280 containerd[1460]: time="2024-09-04T17:33:44.217246535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nnsb,Uid:302bd635-ccd1-4c46-9368-eb8aaa152294,Namespace:kube-system,Attempt:0,}" Sep 4 17:33:44.228372 containerd[1460]: time="2024-09-04T17:33:44.228337791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8b6c85-kqmww,Uid:66bd0464-0844-44e1-8cd9-a36b4d73396c,Namespace:calico-system,Attempt:0,}" Sep 4 17:33:44.282605 containerd[1460]: time="2024-09-04T17:33:44.282551907Z" level=error msg="Failed to destroy network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.283168 containerd[1460]: time="2024-09-04T17:33:44.282974173Z" level=error msg="encountered an error cleaning up failed sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 4 17:33:44.283168 containerd[1460]: time="2024-09-04T17:33:44.283019638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hjz5r,Uid:72f5b034-8e53-4dda-a746-9a05ab61c7bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.283270 kubelet[2545]: E0904 17:33:44.283196 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.283270 kubelet[2545]: E0904 17:33:44.283246 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hjz5r" Sep 4 17:33:44.283270 kubelet[2545]: E0904 17:33:44.283265 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-hjz5r" Sep 4 17:33:44.283384 kubelet[2545]: E0904 17:33:44.283304 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hjz5r_kube-system(72f5b034-8e53-4dda-a746-9a05ab61c7bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hjz5r_kube-system(72f5b034-8e53-4dda-a746-9a05ab61c7bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hjz5r" podUID="72f5b034-8e53-4dda-a746-9a05ab61c7bd" Sep 4 17:33:44.289866 containerd[1460]: time="2024-09-04T17:33:44.289807778Z" level=error msg="Failed to destroy network for sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.290486 containerd[1460]: time="2024-09-04T17:33:44.290174589Z" level=error msg="encountered an error cleaning up failed sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.290486 containerd[1460]: time="2024-09-04T17:33:44.290220916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nnsb,Uid:302bd635-ccd1-4c46-9368-eb8aaa152294,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.290544 kubelet[2545]: E0904 17:33:44.290370 2545 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.290544 kubelet[2545]: E0904 17:33:44.290420 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nnsb" Sep 4 17:33:44.290544 kubelet[2545]: E0904 17:33:44.290441 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2nnsb" Sep 4 17:33:44.290631 kubelet[2545]: E0904 17:33:44.290474 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2nnsb_kube-system(302bd635-ccd1-4c46-9368-eb8aaa152294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-2nnsb_kube-system(302bd635-ccd1-4c46-9368-eb8aaa152294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2nnsb" podUID="302bd635-ccd1-4c46-9368-eb8aaa152294" Sep 4 17:33:44.295854 containerd[1460]: time="2024-09-04T17:33:44.295805312Z" level=error msg="Failed to destroy network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.296179 containerd[1460]: time="2024-09-04T17:33:44.296142677Z" level=error msg="encountered an error cleaning up failed sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.296214 containerd[1460]: time="2024-09-04T17:33:44.296197210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8b6c85-kqmww,Uid:66bd0464-0844-44e1-8cd9-a36b4d73396c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.296376 kubelet[2545]: E0904 17:33:44.296342 2545 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.296417 kubelet[2545]: E0904 17:33:44.296387 2545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d8b6c85-kqmww" Sep 4 17:33:44.296417 kubelet[2545]: E0904 17:33:44.296406 2545 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d8b6c85-kqmww" Sep 4 17:33:44.296466 kubelet[2545]: E0904 17:33:44.296442 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d8b6c85-kqmww_calico-system(66bd0464-0844-44e1-8cd9-a36b4d73396c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d8b6c85-kqmww_calico-system(66bd0464-0844-44e1-8cd9-a36b4d73396c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d8b6c85-kqmww" podUID="66bd0464-0844-44e1-8cd9-a36b4d73396c" Sep 4 17:33:44.834057 kubelet[2545]: I0904 17:33:44.833999 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:33:44.835332 containerd[1460]: time="2024-09-04T17:33:44.834845854Z" level=info msg="StopPodSandbox for \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\"" Sep 4 17:33:44.835332 containerd[1460]: time="2024-09-04T17:33:44.835084603Z" level=info msg="Ensure that sandbox effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c in task-service has been cleanup successfully" Sep 4 17:33:44.836801 kubelet[2545]: I0904 17:33:44.835176 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:33:44.836801 kubelet[2545]: I0904 17:33:44.836300 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:33:44.836927 containerd[1460]: time="2024-09-04T17:33:44.835492991Z" level=info msg="StopPodSandbox for \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\"" Sep 4 17:33:44.836927 containerd[1460]: time="2024-09-04T17:33:44.835683320Z" level=info msg="Ensure that sandbox fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c in task-service has been cleanup successfully" Sep 4 17:33:44.838752 containerd[1460]: time="2024-09-04T17:33:44.837848954Z" level=info msg="StopPodSandbox for \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\"" Sep 4 17:33:44.838752 containerd[1460]: time="2024-09-04T17:33:44.838092863Z" level=info msg="Ensure that sandbox 
6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657 in task-service has been cleanup successfully" Sep 4 17:33:44.840095 kubelet[2545]: I0904 17:33:44.839738 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:33:44.841595 containerd[1460]: time="2024-09-04T17:33:44.841497729Z" level=info msg="StopPodSandbox for \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\"" Sep 4 17:33:44.842418 containerd[1460]: time="2024-09-04T17:33:44.842394686Z" level=info msg="Ensure that sandbox 2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474 in task-service has been cleanup successfully" Sep 4 17:33:44.845231 kubelet[2545]: E0904 17:33:44.844943 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:44.846946 containerd[1460]: time="2024-09-04T17:33:44.846711778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:33:44.874079 containerd[1460]: time="2024-09-04T17:33:44.874016327Z" level=error msg="StopPodSandbox for \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\" failed" error="failed to destroy network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.874331 kubelet[2545]: E0904 17:33:44.874279 2545 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:33:44.874415 kubelet[2545]: E0904 17:33:44.874360 2545 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c"} Sep 4 17:33:44.874472 kubelet[2545]: E0904 17:33:44.874444 2545 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:33:44.874557 kubelet[2545]: E0904 17:33:44.874475 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2m97v" podUID="b3ad58bb-75f5-444f-ace4-e9ea2e8aac02" Sep 4 17:33:44.883534 containerd[1460]: time="2024-09-04T17:33:44.883474332Z" level=error msg="StopPodSandbox for \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\" failed" error="failed to destroy network for sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.883741 kubelet[2545]: E0904 17:33:44.883697 2545 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:33:44.883859 kubelet[2545]: E0904 17:33:44.883760 2545 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657"} Sep 4 17:33:44.883859 kubelet[2545]: E0904 17:33:44.883791 2545 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"302bd635-ccd1-4c46-9368-eb8aaa152294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:33:44.883859 kubelet[2545]: E0904 17:33:44.883814 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"302bd635-ccd1-4c46-9368-eb8aaa152294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2nnsb" 
podUID="302bd635-ccd1-4c46-9368-eb8aaa152294" Sep 4 17:33:44.885956 containerd[1460]: time="2024-09-04T17:33:44.885659944Z" level=error msg="StopPodSandbox for \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\" failed" error="failed to destroy network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.885998 kubelet[2545]: E0904 17:33:44.885885 2545 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:33:44.885998 kubelet[2545]: E0904 17:33:44.885930 2545 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c"} Sep 4 17:33:44.885998 kubelet[2545]: E0904 17:33:44.885963 2545 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66bd0464-0844-44e1-8cd9-a36b4d73396c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:33:44.886099 kubelet[2545]: E0904 17:33:44.885989 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"66bd0464-0844-44e1-8cd9-a36b4d73396c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d8b6c85-kqmww" podUID="66bd0464-0844-44e1-8cd9-a36b4d73396c" Sep 4 17:33:44.888043 containerd[1460]: time="2024-09-04T17:33:44.887997272Z" level=error msg="StopPodSandbox for \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\" failed" error="failed to destroy network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:33:44.888182 kubelet[2545]: E0904 17:33:44.888159 2545 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:33:44.888242 kubelet[2545]: E0904 17:33:44.888183 2545 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474"} Sep 4 17:33:44.888242 kubelet[2545]: E0904 17:33:44.888203 2545 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72f5b034-8e53-4dda-a746-9a05ab61c7bd\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:33:44.888242 kubelet[2545]: E0904 17:33:44.888221 2545 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72f5b034-8e53-4dda-a746-9a05ab61c7bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hjz5r" podUID="72f5b034-8e53-4dda-a746-9a05ab61c7bd" Sep 4 17:33:47.750978 systemd[1]: Started sshd@8-10.0.0.157:22-10.0.0.1:44692.service - OpenSSH per-connection server daemon (10.0.0.1:44692). Sep 4 17:33:47.791046 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 44692 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:33:47.792916 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:47.797624 systemd-logind[1444]: New session 9 of user core. Sep 4 17:33:47.805030 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:33:47.946557 sshd[3520]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:47.950385 systemd[1]: sshd@8-10.0.0.157:22-10.0.0.1:44692.service: Deactivated successfully. Sep 4 17:33:47.952552 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:33:47.953507 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:33:47.954654 systemd-logind[1444]: Removed session 9. 
Sep 4 17:33:48.569104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762276560.mount: Deactivated successfully. Sep 4 17:33:49.753459 containerd[1460]: time="2024-09-04T17:33:49.753377211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:49.754488 containerd[1460]: time="2024-09-04T17:33:49.754436633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:33:49.755897 containerd[1460]: time="2024-09-04T17:33:49.755852946Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:49.758095 containerd[1460]: time="2024-09-04T17:33:49.758046400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:49.758563 containerd[1460]: time="2024-09-04T17:33:49.758517776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.911773286s" Sep 4 17:33:49.758563 containerd[1460]: time="2024-09-04T17:33:49.758555637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:33:49.766086 containerd[1460]: time="2024-09-04T17:33:49.766052351Z" level=info msg="CreateContainer within sandbox \"aabeec51bbbd3ebdbac77b18209f1f24d041f8824ad5a02c4c07483103dd8e51\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:33:49.782844 containerd[1460]: time="2024-09-04T17:33:49.782798594Z" level=info msg="CreateContainer within sandbox \"aabeec51bbbd3ebdbac77b18209f1f24d041f8824ad5a02c4c07483103dd8e51\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fa859c0469450ae9a2ad10561c00b44f014c9e8f65f22e5d6c27ebd5a2d896ce\"" Sep 4 17:33:49.783301 containerd[1460]: time="2024-09-04T17:33:49.783256775Z" level=info msg="StartContainer for \"fa859c0469450ae9a2ad10561c00b44f014c9e8f65f22e5d6c27ebd5a2d896ce\"" Sep 4 17:33:49.843987 systemd[1]: Started cri-containerd-fa859c0469450ae9a2ad10561c00b44f014c9e8f65f22e5d6c27ebd5a2d896ce.scope - libcontainer container fa859c0469450ae9a2ad10561c00b44f014c9e8f65f22e5d6c27ebd5a2d896ce. Sep 4 17:33:49.876237 containerd[1460]: time="2024-09-04T17:33:49.876195380Z" level=info msg="StartContainer for \"fa859c0469450ae9a2ad10561c00b44f014c9e8f65f22e5d6c27ebd5a2d896ce\" returns successfully" Sep 4 17:33:49.948309 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:33:49.948444 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 17:33:50.871328 kubelet[2545]: E0904 17:33:50.871299 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:50.884802 kubelet[2545]: I0904 17:33:50.884724 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5cp78" podStartSLOduration=2.041882266 podStartE2EDuration="17.884683665s" podCreationTimestamp="2024-09-04 17:33:33 +0000 UTC" firstStartedPulling="2024-09-04 17:33:33.916264848 +0000 UTC m=+22.216155501" lastFinishedPulling="2024-09-04 17:33:49.759066247 +0000 UTC m=+38.058956900" observedRunningTime="2024-09-04 17:33:50.884357391 +0000 UTC m=+39.184248044" watchObservedRunningTime="2024-09-04 17:33:50.884683665 +0000 UTC m=+39.184574318" Sep 4 17:33:51.299114 kubelet[2545]: I0904 17:33:51.299016 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:33:51.299647 kubelet[2545]: E0904 17:33:51.299629 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:51.873096 kubelet[2545]: E0904 17:33:51.873064 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:52.378849 kernel: bpftool[3765]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:33:52.607066 systemd-networkd[1397]: vxlan.calico: Link UP Sep 4 17:33:52.607078 systemd-networkd[1397]: vxlan.calico: Gained carrier Sep 4 17:33:52.960740 systemd[1]: Started sshd@9-10.0.0.157:22-10.0.0.1:44698.service - OpenSSH per-connection server daemon (10.0.0.1:44698). 
Sep 4 17:33:52.999270 sshd[3837]: Accepted publickey for core from 10.0.0.1 port 44698 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:33:53.000777 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:53.004483 systemd-logind[1444]: New session 10 of user core. Sep 4 17:33:53.017944 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:33:53.144071 sshd[3837]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:53.155482 systemd[1]: sshd@9-10.0.0.157:22-10.0.0.1:44698.service: Deactivated successfully. Sep 4 17:33:53.157251 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:33:53.158548 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:33:53.159712 systemd[1]: Started sshd@10-10.0.0.157:22-10.0.0.1:44708.service - OpenSSH per-connection server daemon (10.0.0.1:44708). Sep 4 17:33:53.160441 systemd-logind[1444]: Removed session 10. Sep 4 17:33:53.194488 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 44708 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:33:53.195924 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:53.199734 systemd-logind[1444]: New session 11 of user core. Sep 4 17:33:53.208942 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:33:53.340495 sshd[3855]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:53.353597 systemd[1]: sshd@10-10.0.0.157:22-10.0.0.1:44708.service: Deactivated successfully. Sep 4 17:33:53.356293 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:33:53.357073 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:33:53.358312 systemd-logind[1444]: Removed session 11. Sep 4 17:33:53.371354 systemd[1]: Started sshd@11-10.0.0.157:22-10.0.0.1:44718.service - OpenSSH per-connection server daemon (10.0.0.1:44718). 
Sep 4 17:33:53.402646 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 44718 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:33:53.404107 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:53.408197 systemd-logind[1444]: New session 12 of user core. Sep 4 17:33:53.414942 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:33:53.514863 sshd[3869]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:53.518318 systemd[1]: sshd@11-10.0.0.157:22-10.0.0.1:44718.service: Deactivated successfully. Sep 4 17:33:53.520082 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:33:53.520673 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:33:53.521573 systemd-logind[1444]: Removed session 12. Sep 4 17:33:54.553999 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Sep 4 17:33:55.514920 kubelet[2545]: I0904 17:33:55.514861 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:33:55.515771 kubelet[2545]: E0904 17:33:55.515737 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:55.884155 kubelet[2545]: E0904 17:33:55.884121 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:56.772200 containerd[1460]: time="2024-09-04T17:33:56.772135214Z" level=info msg="StopPodSandbox for \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\"" Sep 4 17:33:56.772200 containerd[1460]: time="2024-09-04T17:33:56.772175319Z" level=info msg="StopPodSandbox for \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\"" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:56.880 [INFO][3960] k8s.go 608: Cleaning up 
netns ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:56.880 [INFO][3960] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" iface="eth0" netns="/var/run/netns/cni-b41a7910-b6f4-5ed9-09de-18522aaf59b8" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:56.880 [INFO][3960] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" iface="eth0" netns="/var/run/netns/cni-b41a7910-b6f4-5ed9-09de-18522aaf59b8" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:56.881 [INFO][3960] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" iface="eth0" netns="/var/run/netns/cni-b41a7910-b6f4-5ed9-09de-18522aaf59b8" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:56.881 [INFO][3960] k8s.go 615: Releasing IP address(es) ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:56.881 [INFO][3960] utils.go 188: Calico CNI releasing IP address ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:56.998 [INFO][3975] ipam_plugin.go 417: Releasing address using handleID ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" HandleID="k8s-pod-network.6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:56.999 [INFO][3975] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:56.999 [INFO][3975] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:57.059 [WARNING][3975] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" HandleID="k8s-pod-network.6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:57.059 [INFO][3975] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" HandleID="k8s-pod-network.6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:57.060 [INFO][3975] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:57.065773 containerd[1460]: 2024-09-04 17:33:57.063 [INFO][3960] k8s.go 621: Teardown processing complete. ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:33:57.068884 systemd[1]: run-netns-cni\x2db41a7910\x2db6f4\x2d5ed9\x2d09de\x2d18522aaf59b8.mount: Deactivated successfully. 
Sep 4 17:33:57.070081 containerd[1460]: time="2024-09-04T17:33:57.069765283Z" level=info msg="TearDown network for sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\" successfully" Sep 4 17:33:57.070081 containerd[1460]: time="2024-09-04T17:33:57.069802333Z" level=info msg="StopPodSandbox for \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\" returns successfully" Sep 4 17:33:57.070963 kubelet[2545]: E0904 17:33:57.070931 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:57.071992 containerd[1460]: time="2024-09-04T17:33:57.071347144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nnsb,Uid:302bd635-ccd1-4c46-9368-eb8aaa152294,Namespace:kube-system,Attempt:1,}" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.047 [INFO][3959] k8s.go 608: Cleaning up netns ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.047 [INFO][3959] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" iface="eth0" netns="/var/run/netns/cni-01ecf96f-bd89-7261-97f6-e0855ec4d1cc" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.047 [INFO][3959] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" iface="eth0" netns="/var/run/netns/cni-01ecf96f-bd89-7261-97f6-e0855ec4d1cc" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.047 [INFO][3959] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" iface="eth0" netns="/var/run/netns/cni-01ecf96f-bd89-7261-97f6-e0855ec4d1cc" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.048 [INFO][3959] k8s.go 615: Releasing IP address(es) ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.048 [INFO][3959] utils.go 188: Calico CNI releasing IP address ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.071 [INFO][3983] ipam_plugin.go 417: Releasing address using handleID ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" HandleID="k8s-pod-network.2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.071 [INFO][3983] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.071 [INFO][3983] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.088 [WARNING][3983] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" HandleID="k8s-pod-network.2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.088 [INFO][3983] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" HandleID="k8s-pod-network.2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.105 [INFO][3983] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:57.110205 containerd[1460]: 2024-09-04 17:33:57.107 [INFO][3959] k8s.go 621: Teardown processing complete. ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:33:57.110665 containerd[1460]: time="2024-09-04T17:33:57.110360243Z" level=info msg="TearDown network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\" successfully" Sep 4 17:33:57.110665 containerd[1460]: time="2024-09-04T17:33:57.110386843Z" level=info msg="StopPodSandbox for \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\" returns successfully" Sep 4 17:33:57.110714 kubelet[2545]: E0904 17:33:57.110688 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:57.111198 containerd[1460]: time="2024-09-04T17:33:57.111128466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hjz5r,Uid:72f5b034-8e53-4dda-a746-9a05ab61c7bd,Namespace:kube-system,Attempt:1,}" Sep 4 17:33:57.113144 systemd[1]: run-netns-cni\x2d01ecf96f\x2dbd89\x2d7261\x2d97f6\x2de0855ec4d1cc.mount: Deactivated successfully. 
Sep 4 17:33:57.772759 containerd[1460]: time="2024-09-04T17:33:57.772519170Z" level=info msg="StopPodSandbox for \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\"" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.873 [INFO][4007] k8s.go 608: Cleaning up netns ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.873 [INFO][4007] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" iface="eth0" netns="/var/run/netns/cni-f9117c72-0829-06db-1fe9-f4c7d71d7fcf" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.873 [INFO][4007] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" iface="eth0" netns="/var/run/netns/cni-f9117c72-0829-06db-1fe9-f4c7d71d7fcf" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.874 [INFO][4007] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" iface="eth0" netns="/var/run/netns/cni-f9117c72-0829-06db-1fe9-f4c7d71d7fcf" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.874 [INFO][4007] k8s.go 615: Releasing IP address(es) ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.874 [INFO][4007] utils.go 188: Calico CNI releasing IP address ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.893 [INFO][4015] ipam_plugin.go 417: Releasing address using handleID ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" HandleID="k8s-pod-network.fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.893 [INFO][4015] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.893 [INFO][4015] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.899 [WARNING][4015] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" HandleID="k8s-pod-network.fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.900 [INFO][4015] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" HandleID="k8s-pod-network.fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.901 [INFO][4015] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:57.906741 containerd[1460]: 2024-09-04 17:33:57.903 [INFO][4007] k8s.go 621: Teardown processing complete. ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:33:57.907210 containerd[1460]: time="2024-09-04T17:33:57.906925319Z" level=info msg="TearDown network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\" successfully" Sep 4 17:33:57.907210 containerd[1460]: time="2024-09-04T17:33:57.906952711Z" level=info msg="StopPodSandbox for \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\" returns successfully" Sep 4 17:33:57.907733 containerd[1460]: time="2024-09-04T17:33:57.907686660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8b6c85-kqmww,Uid:66bd0464-0844-44e1-8cd9-a36b4d73396c,Namespace:calico-system,Attempt:1,}" Sep 4 17:33:57.909515 systemd[1]: run-netns-cni\x2df9117c72\x2d0829\x2d06db\x2d1fe9\x2df4c7d71d7fcf.mount: Deactivated successfully. 
Sep 4 17:33:58.041768 systemd-networkd[1397]: cali54f29dbde8b: Link UP Sep 4 17:33:58.046037 systemd-networkd[1397]: cali54f29dbde8b: Gained carrier Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:57.948 [INFO][4030] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0 coredns-7db6d8ff4d- kube-system 302bd635-ccd1-4c46-9368-eb8aaa152294 804 0 2024-09-04 17:33:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-2nnsb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali54f29dbde8b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nnsb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nnsb-" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:57.948 [INFO][4030] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nnsb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:57.985 [INFO][4045] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" HandleID="k8s-pod-network.0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:57.997 [INFO][4045] ipam_plugin.go 270: Auto assigning IP ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" 
HandleID="k8s-pod-network.0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d9f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-2nnsb", "timestamp":"2024-09-04 17:33:57.985326662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:57.997 [INFO][4045] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:57.997 [INFO][4045] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:57.997 [INFO][4045] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.000 [INFO][4045] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" host="localhost" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.006 [INFO][4045] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.012 [INFO][4045] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.014 [INFO][4045] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.017 [INFO][4045] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.017 [INFO][4045] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" host="localhost" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.019 [INFO][4045] ipam.go 1685: Creating new handle: k8s-pod-network.0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.022 [INFO][4045] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" host="localhost" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.028 [INFO][4045] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" host="localhost" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.028 [INFO][4045] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" host="localhost" Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.028 [INFO][4045] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:33:58.074766 containerd[1460]: 2024-09-04 17:33:58.028 [INFO][4045] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" HandleID="k8s-pod-network.0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:58.076164 containerd[1460]: 2024-09-04 17:33:58.033 [INFO][4030] k8s.go 386: Populated endpoint ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nnsb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"302bd635-ccd1-4c46-9368-eb8aaa152294", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-2nnsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54f29dbde8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:58.076164 containerd[1460]: 2024-09-04 17:33:58.033 [INFO][4030] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nnsb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:58.076164 containerd[1460]: 2024-09-04 17:33:58.033 [INFO][4030] dataplane_linux.go 68: Setting the host side veth name to cali54f29dbde8b ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nnsb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:58.076164 containerd[1460]: 2024-09-04 17:33:58.045 [INFO][4030] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nnsb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:58.076164 containerd[1460]: 2024-09-04 17:33:58.057 [INFO][4030] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nnsb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"302bd635-ccd1-4c46-9368-eb8aaa152294", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a", Pod:"coredns-7db6d8ff4d-2nnsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54f29dbde8b", MAC:"ae:dd:58:38:70:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:58.076164 containerd[1460]: 2024-09-04 17:33:58.070 [INFO][4030] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2nnsb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:33:58.114121 systemd-networkd[1397]: 
cali69a22032bf6: Link UP Sep 4 17:33:58.116055 systemd-networkd[1397]: cali69a22032bf6: Gained carrier Sep 4 17:33:58.122377 containerd[1460]: time="2024-09-04T17:33:58.121926337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:58.122377 containerd[1460]: time="2024-09-04T17:33:58.121982442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:58.122377 containerd[1460]: time="2024-09-04T17:33:58.122003912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:58.122377 containerd[1460]: time="2024-09-04T17:33:58.122016105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.000 [INFO][4048] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0 coredns-7db6d8ff4d- kube-system 72f5b034-8e53-4dda-a746-9a05ab61c7bd 805 0 2024-09-04 17:33:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-hjz5r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69a22032bf6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjz5r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hjz5r-" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.000 [INFO][4048] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjz5r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.043 [INFO][4076] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" HandleID="k8s-pod-network.edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.072 [INFO][4076] ipam_plugin.go 270: Auto assigning IP ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" HandleID="k8s-pod-network.edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a5c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-hjz5r", "timestamp":"2024-09-04 17:33:58.043718268 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.072 [INFO][4076] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.072 [INFO][4076] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.073 [INFO][4076] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.076 [INFO][4076] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" host="localhost" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.080 [INFO][4076] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.084 [INFO][4076] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.087 [INFO][4076] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.091 [INFO][4076] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.091 [INFO][4076] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" host="localhost" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.092 [INFO][4076] ipam.go 1685: Creating new handle: k8s-pod-network.edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.095 [INFO][4076] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" host="localhost" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.100 [INFO][4076] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" host="localhost" Sep 4 
17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.100 [INFO][4076] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" host="localhost" Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.101 [INFO][4076] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:58.133866 containerd[1460]: 2024-09-04 17:33:58.101 [INFO][4076] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" HandleID="k8s-pod-network.edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:58.134524 containerd[1460]: 2024-09-04 17:33:58.106 [INFO][4048] k8s.go 386: Populated endpoint ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjz5r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"72f5b034-8e53-4dda-a746-9a05ab61c7bd", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-hjz5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69a22032bf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:58.134524 containerd[1460]: 2024-09-04 17:33:58.106 [INFO][4048] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjz5r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:58.134524 containerd[1460]: 2024-09-04 17:33:58.106 [INFO][4048] dataplane_linux.go 68: Setting the host side veth name to cali69a22032bf6 ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjz5r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:58.134524 containerd[1460]: 2024-09-04 17:33:58.114 [INFO][4048] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjz5r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:58.134524 containerd[1460]: 2024-09-04 17:33:58.114 [INFO][4048] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjz5r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"72f5b034-8e53-4dda-a746-9a05ab61c7bd", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e", Pod:"coredns-7db6d8ff4d-hjz5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69a22032bf6", MAC:"36:94:29:c0:6e:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:58.134524 containerd[1460]: 2024-09-04 17:33:58.131 [INFO][4048] k8s.go 500: Wrote updated endpoint to datastore ContainerID="edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hjz5r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:33:58.152001 systemd[1]: Started cri-containerd-0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a.scope - libcontainer container 0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a. Sep 4 17:33:58.175383 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:33:58.176670 containerd[1460]: time="2024-09-04T17:33:58.175950767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:58.176670 containerd[1460]: time="2024-09-04T17:33:58.176011131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:58.176670 containerd[1460]: time="2024-09-04T17:33:58.176035707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:58.176670 containerd[1460]: time="2024-09-04T17:33:58.176045675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:58.207440 systemd-networkd[1397]: calid2233885d00: Link UP Sep 4 17:33:58.207874 systemd-networkd[1397]: calid2233885d00: Gained carrier Sep 4 17:33:58.215573 systemd[1]: Started cri-containerd-edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e.scope - libcontainer container edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e. 
Sep 4 17:33:58.218851 containerd[1460]: time="2024-09-04T17:33:58.218744813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2nnsb,Uid:302bd635-ccd1-4c46-9368-eb8aaa152294,Namespace:kube-system,Attempt:1,} returns sandbox id \"0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a\"" Sep 4 17:33:58.219920 kubelet[2545]: E0904 17:33:58.219881 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:58.225273 containerd[1460]: time="2024-09-04T17:33:58.224513328Z" level=info msg="CreateContainer within sandbox \"0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.020 [INFO][4060] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0 calico-kube-controllers-6d8b6c85- calico-system 66bd0464-0844-44e1-8cd9-a36b4d73396c 813 0 2024-09-04 17:33:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d8b6c85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d8b6c85-kqmww eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid2233885d00 [] []}} ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Namespace="calico-system" Pod="calico-kube-controllers-6d8b6c85-kqmww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.022 [INFO][4060] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Namespace="calico-system" Pod="calico-kube-controllers-6d8b6c85-kqmww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.087 [INFO][4085] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" HandleID="k8s-pod-network.2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.103 [INFO][4085] ipam_plugin.go 270: Auto assigning IP ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" HandleID="k8s-pod-network.2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000280bc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d8b6c85-kqmww", "timestamp":"2024-09-04 17:33:58.087206127 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.103 [INFO][4085] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.103 [INFO][4085] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.103 [INFO][4085] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.107 [INFO][4085] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" host="localhost" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.120 [INFO][4085] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.134 [INFO][4085] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.139 [INFO][4085] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.144 [INFO][4085] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.144 [INFO][4085] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" host="localhost" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.149 [INFO][4085] ipam.go 1685: Creating new handle: k8s-pod-network.2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.154 [INFO][4085] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" host="localhost" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.167 [INFO][4085] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" host="localhost" Sep 4 
17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.167 [INFO][4085] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" host="localhost" Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.167 [INFO][4085] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:58.242355 containerd[1460]: 2024-09-04 17:33:58.167 [INFO][4085] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" HandleID="k8s-pod-network.2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:58.242960 containerd[1460]: 2024-09-04 17:33:58.194 [INFO][4060] k8s.go 386: Populated endpoint ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Namespace="calico-system" Pod="calico-kube-controllers-6d8b6c85-kqmww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0", GenerateName:"calico-kube-controllers-6d8b6c85-", Namespace:"calico-system", SelfLink:"", UID:"66bd0464-0844-44e1-8cd9-a36b4d73396c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8b6c85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d8b6c85-kqmww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2233885d00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:58.242960 containerd[1460]: 2024-09-04 17:33:58.195 [INFO][4060] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Namespace="calico-system" Pod="calico-kube-controllers-6d8b6c85-kqmww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:58.242960 containerd[1460]: 2024-09-04 17:33:58.196 [INFO][4060] dataplane_linux.go 68: Setting the host side veth name to calid2233885d00 ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Namespace="calico-system" Pod="calico-kube-controllers-6d8b6c85-kqmww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:58.242960 containerd[1460]: 2024-09-04 17:33:58.209 [INFO][4060] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Namespace="calico-system" Pod="calico-kube-controllers-6d8b6c85-kqmww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:58.242960 containerd[1460]: 2024-09-04 17:33:58.214 [INFO][4060] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Namespace="calico-system" 
Pod="calico-kube-controllers-6d8b6c85-kqmww" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0", GenerateName:"calico-kube-controllers-6d8b6c85-", Namespace:"calico-system", SelfLink:"", UID:"66bd0464-0844-44e1-8cd9-a36b4d73396c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8b6c85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e", Pod:"calico-kube-controllers-6d8b6c85-kqmww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2233885d00", MAC:"9e:c8:39:a7:bd:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:58.242960 containerd[1460]: 2024-09-04 17:33:58.238 [INFO][4060] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e" Namespace="calico-system" Pod="calico-kube-controllers-6d8b6c85-kqmww" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:33:58.250975 containerd[1460]: time="2024-09-04T17:33:58.250928898Z" level=info msg="CreateContainer within sandbox \"0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e71d599b8e62049aca86a1a308f0db54e627ac1105aa9fd033e59f0efec19a77\"" Sep 4 17:33:58.251801 containerd[1460]: time="2024-09-04T17:33:58.251731326Z" level=info msg="StartContainer for \"e71d599b8e62049aca86a1a308f0db54e627ac1105aa9fd033e59f0efec19a77\"" Sep 4 17:33:58.270131 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:33:58.285054 containerd[1460]: time="2024-09-04T17:33:58.283908247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:58.285054 containerd[1460]: time="2024-09-04T17:33:58.283961087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:58.285054 containerd[1460]: time="2024-09-04T17:33:58.283978810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:58.285054 containerd[1460]: time="2024-09-04T17:33:58.283991203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:58.298069 systemd[1]: Started cri-containerd-e71d599b8e62049aca86a1a308f0db54e627ac1105aa9fd033e59f0efec19a77.scope - libcontainer container e71d599b8e62049aca86a1a308f0db54e627ac1105aa9fd033e59f0efec19a77. 
Sep 4 17:33:58.312870 systemd[1]: Started cri-containerd-2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e.scope - libcontainer container 2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e. Sep 4 17:33:58.324302 containerd[1460]: time="2024-09-04T17:33:58.324119031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hjz5r,Uid:72f5b034-8e53-4dda-a746-9a05ab61c7bd,Namespace:kube-system,Attempt:1,} returns sandbox id \"edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e\"" Sep 4 17:33:58.325023 kubelet[2545]: E0904 17:33:58.325005 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:58.331674 containerd[1460]: time="2024-09-04T17:33:58.330080659Z" level=info msg="CreateContainer within sandbox \"edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:33:58.334052 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:33:58.344599 containerd[1460]: time="2024-09-04T17:33:58.344491549Z" level=info msg="StartContainer for \"e71d599b8e62049aca86a1a308f0db54e627ac1105aa9fd033e59f0efec19a77\" returns successfully" Sep 4 17:33:58.355884 containerd[1460]: time="2024-09-04T17:33:58.355680697Z" level=info msg="CreateContainer within sandbox \"edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e20c734ada142315f344a7c78c52541b1b5482ba1f9d0405c70b4000136ff63c\"" Sep 4 17:33:58.359545 containerd[1460]: time="2024-09-04T17:33:58.359509299Z" level=info msg="StartContainer for \"e20c734ada142315f344a7c78c52541b1b5482ba1f9d0405c70b4000136ff63c\"" Sep 4 17:33:58.363721 containerd[1460]: time="2024-09-04T17:33:58.363681837Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8b6c85-kqmww,Uid:66bd0464-0844-44e1-8cd9-a36b4d73396c,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e\"" Sep 4 17:33:58.365880 containerd[1460]: time="2024-09-04T17:33:58.365799805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:33:58.391963 systemd[1]: Started cri-containerd-e20c734ada142315f344a7c78c52541b1b5482ba1f9d0405c70b4000136ff63c.scope - libcontainer container e20c734ada142315f344a7c78c52541b1b5482ba1f9d0405c70b4000136ff63c. Sep 4 17:33:58.421715 containerd[1460]: time="2024-09-04T17:33:58.421465830Z" level=info msg="StartContainer for \"e20c734ada142315f344a7c78c52541b1b5482ba1f9d0405c70b4000136ff63c\" returns successfully" Sep 4 17:33:58.529207 systemd[1]: Started sshd@12-10.0.0.157:22-10.0.0.1:58390.service - OpenSSH per-connection server daemon (10.0.0.1:58390). Sep 4 17:33:58.568685 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 58390 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:33:58.570158 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:58.574361 systemd-logind[1444]: New session 13 of user core. Sep 4 17:33:58.583944 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:33:58.703879 sshd[4347]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:58.708040 systemd[1]: sshd@12-10.0.0.157:22-10.0.0.1:58390.service: Deactivated successfully. Sep 4 17:33:58.710173 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:33:58.710863 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:33:58.711695 systemd-logind[1444]: Removed session 13. 
Sep 4 17:33:58.772169 containerd[1460]: time="2024-09-04T17:33:58.772112304Z" level=info msg="StopPodSandbox for \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\"" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.817 [INFO][4377] k8s.go 608: Cleaning up netns ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.817 [INFO][4377] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" iface="eth0" netns="/var/run/netns/cni-7d7dd8ed-d6bf-e2ad-83ab-9daf913b3e84" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.817 [INFO][4377] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" iface="eth0" netns="/var/run/netns/cni-7d7dd8ed-d6bf-e2ad-83ab-9daf913b3e84" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.817 [INFO][4377] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" iface="eth0" netns="/var/run/netns/cni-7d7dd8ed-d6bf-e2ad-83ab-9daf913b3e84" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.817 [INFO][4377] k8s.go 615: Releasing IP address(es) ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.817 [INFO][4377] utils.go 188: Calico CNI releasing IP address ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.837 [INFO][4384] ipam_plugin.go 417: Releasing address using handleID ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" HandleID="k8s-pod-network.effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.837 [INFO][4384] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.837 [INFO][4384] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.842 [WARNING][4384] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" HandleID="k8s-pod-network.effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.842 [INFO][4384] ipam_plugin.go 445: Releasing address using workloadID ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" HandleID="k8s-pod-network.effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.843 [INFO][4384] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:58.848362 containerd[1460]: 2024-09-04 17:33:58.845 [INFO][4377] k8s.go 621: Teardown processing complete. ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:33:58.849047 containerd[1460]: time="2024-09-04T17:33:58.848530299Z" level=info msg="TearDown network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\" successfully" Sep 4 17:33:58.849047 containerd[1460]: time="2024-09-04T17:33:58.848594020Z" level=info msg="StopPodSandbox for \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\" returns successfully" Sep 4 17:33:58.849262 containerd[1460]: time="2024-09-04T17:33:58.849229964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2m97v,Uid:b3ad58bb-75f5-444f-ace4-e9ea2e8aac02,Namespace:calico-system,Attempt:1,}" Sep 4 17:33:58.898056 kubelet[2545]: E0904 17:33:58.897958 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:58.900032 kubelet[2545]: E0904 17:33:58.900007 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:58.909939 kubelet[2545]: I0904 17:33:58.909494 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2nnsb" podStartSLOduration=30.909473087 podStartE2EDuration="30.909473087s" podCreationTimestamp="2024-09-04 17:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:33:58.909085068 +0000 UTC m=+47.208975721" watchObservedRunningTime="2024-09-04 17:33:58.909473087 +0000 UTC m=+47.209363740" Sep 4 17:33:58.929786 kubelet[2545]: I0904 17:33:58.928424 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hjz5r" podStartSLOduration=30.928398396 podStartE2EDuration="30.928398396s" podCreationTimestamp="2024-09-04 17:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:33:58.928074107 +0000 UTC m=+47.227964760" watchObservedRunningTime="2024-09-04 17:33:58.928398396 +0000 UTC m=+47.228289059" Sep 4 17:33:58.963139 systemd-networkd[1397]: cali48ef8df6221: Link UP Sep 4 17:33:58.963328 systemd-networkd[1397]: cali48ef8df6221: Gained carrier Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.887 [INFO][4391] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2m97v-eth0 csi-node-driver- calico-system b3ad58bb-75f5-444f-ace4-e9ea2e8aac02 847 0 2024-09-04 17:33:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-2m97v eth0 default [] [] [kns.calico-system 
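The kubelet `pod_startup_latency_tracker` lines above report `podStartSLOduration=30.909473087` for coredns-7db6d8ff4d-2nnsb, which is simply `observedRunningTime` minus `podCreationTimestamp`. A minimal sketch of that arithmetic, with the timestamps copied from the log (truncated to microseconds, Python's `datetime` resolution):

```python
from datetime import datetime, timezone

# podCreationTimestamp and watchObservedRunningTime from the kubelet line above.
created = datetime(2024, 9, 4, 17, 33, 28, tzinfo=timezone.utc)
observed = datetime(2024, 9, 4, 17, 33, 58, 909473, tzinfo=timezone.utc)  # 17:33:58.909473087 UTC

slo_duration = (observed - created).total_seconds()
print(f"podStartSLOduration ~= {slo_duration:.6f}s")  # ~= 30.909473s, matching the log
```

Note that `firstStartedPulling`/`lastFinishedPulling` are the zero value (`0001-01-01`) here because the coredns image was already present on the node, so no pull contributed to the duration.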
ksa.calico-system.default] cali48ef8df6221 [] []}} ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Namespace="calico-system" Pod="csi-node-driver-2m97v" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m97v-" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.887 [INFO][4391] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Namespace="calico-system" Pod="csi-node-driver-2m97v" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.921 [INFO][4405] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" HandleID="k8s-pod-network.332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.935 [INFO][4405] ipam_plugin.go 270: Auto assigning IP ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" HandleID="k8s-pod-network.332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312d50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2m97v", "timestamp":"2024-09-04 17:33:58.921441097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.935 [INFO][4405] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.935 [INFO][4405] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.935 [INFO][4405] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.938 [INFO][4405] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" host="localhost" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.943 [INFO][4405] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.946 [INFO][4405] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.948 [INFO][4405] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.949 [INFO][4405] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.949 [INFO][4405] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" host="localhost" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.951 [INFO][4405] ipam.go 1685: Creating new handle: k8s-pod-network.332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5 Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.953 [INFO][4405] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" host="localhost" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.956 [INFO][4405] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" host="localhost" Sep 4 
17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.957 [INFO][4405] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" host="localhost" Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.957 [INFO][4405] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:58.973217 containerd[1460]: 2024-09-04 17:33:58.957 [INFO][4405] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" HandleID="k8s-pod-network.332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.973713 containerd[1460]: 2024-09-04 17:33:58.960 [INFO][4391] k8s.go 386: Populated endpoint ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Namespace="calico-system" Pod="csi-node-driver-2m97v" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m97v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2m97v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2m97v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali48ef8df6221", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:58.973713 containerd[1460]: 2024-09-04 17:33:58.960 [INFO][4391] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Namespace="calico-system" Pod="csi-node-driver-2m97v" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.973713 containerd[1460]: 2024-09-04 17:33:58.960 [INFO][4391] dataplane_linux.go 68: Setting the host side veth name to cali48ef8df6221 ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Namespace="calico-system" Pod="csi-node-driver-2m97v" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.973713 containerd[1460]: 2024-09-04 17:33:58.964 [INFO][4391] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Namespace="calico-system" Pod="csi-node-driver-2m97v" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.973713 containerd[1460]: 2024-09-04 17:33:58.964 [INFO][4391] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Namespace="calico-system" Pod="csi-node-driver-2m97v" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m97v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2m97v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5", Pod:"csi-node-driver-2m97v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali48ef8df6221", MAC:"ba:bf:90:b3:89:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:58.973713 containerd[1460]: 2024-09-04 17:33:58.969 [INFO][4391] k8s.go 500: Wrote updated endpoint to datastore ContainerID="332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5" Namespace="calico-system" Pod="csi-node-driver-2m97v" WorkloadEndpoint="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:33:58.995603 containerd[1460]: time="2024-09-04T17:33:58.995454456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:58.995895 containerd[1460]: time="2024-09-04T17:33:58.995586845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:58.995895 containerd[1460]: time="2024-09-04T17:33:58.995646897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:58.995895 containerd[1460]: time="2024-09-04T17:33:58.995664320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:59.014987 systemd[1]: Started cri-containerd-332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5.scope - libcontainer container 332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5. Sep 4 17:33:59.026573 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:33:59.037234 containerd[1460]: time="2024-09-04T17:33:59.037192443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2m97v,Uid:b3ad58bb-75f5-444f-ace4-e9ea2e8aac02,Namespace:calico-system,Attempt:1,} returns sandbox id \"332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5\"" Sep 4 17:33:59.072831 systemd[1]: run-containerd-runc-k8s.io-edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e-runc.I7Gpdg.mount: Deactivated successfully. Sep 4 17:33:59.073088 systemd[1]: run-netns-cni\x2d7d7dd8ed\x2dd6bf\x2de2ad\x2d83ab\x2d9daf913b3e84.mount: Deactivated successfully. 
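The Calico IPAM exchange above (`ipam.go` / `ipam_plugin.go`) assigns `192.168.88.132/26` for csi-node-driver-2m97v out of the host-affine block `192.168.88.128/26`. The containment and block size can be sanity-checked with Python's standard `ipaddress` module (this is an illustrative check, not part of Calico itself):

```python
import ipaddress

# Block affinity and assigned address as recorded in the IPAM log lines above.
block = ipaddress.ip_network("192.168.88.128/26")
assigned = ipaddress.ip_address("192.168.88.132")

print(assigned in block)    # True: a /26 spans .128 through .191
print(block.num_addresses)  # 64 addresses per affinity block
```

This is consistent with the earlier assignments in the log (`.128/26` affinity confirmed, then individual `/32` addresses handed to each workload endpoint).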
Sep 4 17:33:59.225943 systemd-networkd[1397]: cali69a22032bf6: Gained IPv6LL Sep 4 17:33:59.868517 systemd-networkd[1397]: calid2233885d00: Gained IPv6LL Sep 4 17:33:59.905655 kubelet[2545]: E0904 17:33:59.905604 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:59.906404 kubelet[2545]: E0904 17:33:59.905898 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:34:00.058464 systemd-networkd[1397]: cali54f29dbde8b: Gained IPv6LL Sep 4 17:34:00.096151 containerd[1460]: time="2024-09-04T17:34:00.096107685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:00.096884 containerd[1460]: time="2024-09-04T17:34:00.096858165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Sep 4 17:34:00.098044 containerd[1460]: time="2024-09-04T17:34:00.098014116Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:00.100032 containerd[1460]: time="2024-09-04T17:34:00.100009874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:00.100695 containerd[1460]: time="2024-09-04T17:34:00.100650217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo 
digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 1.734772256s" Sep 4 17:34:00.100695 containerd[1460]: time="2024-09-04T17:34:00.100678861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Sep 4 17:34:00.102160 containerd[1460]: time="2024-09-04T17:34:00.102060216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:34:00.109721 containerd[1460]: time="2024-09-04T17:34:00.109676350Z" level=info msg="CreateContainer within sandbox \"2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:34:00.123667 containerd[1460]: time="2024-09-04T17:34:00.123547353Z" level=info msg="CreateContainer within sandbox \"2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c0bdec361e53afca032ab77fac18fb4a376c96074aa7e294c48dc6b56bdbe6ca\"" Sep 4 17:34:00.124213 containerd[1460]: time="2024-09-04T17:34:00.124177566Z" level=info msg="StartContainer for \"c0bdec361e53afca032ab77fac18fb4a376c96074aa7e294c48dc6b56bdbe6ca\"" Sep 4 17:34:00.158950 systemd[1]: Started cri-containerd-c0bdec361e53afca032ab77fac18fb4a376c96074aa7e294c48dc6b56bdbe6ca.scope - libcontainer container c0bdec361e53afca032ab77fac18fb4a376c96074aa7e294c48dc6b56bdbe6ca. 
Sep 4 17:34:00.305076 containerd[1460]: time="2024-09-04T17:34:00.304992560Z" level=info msg="StartContainer for \"c0bdec361e53afca032ab77fac18fb4a376c96074aa7e294c48dc6b56bdbe6ca\" returns successfully" Sep 4 17:34:00.698009 systemd-networkd[1397]: cali48ef8df6221: Gained IPv6LL Sep 4 17:34:00.908003 kubelet[2545]: E0904 17:34:00.907974 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:34:00.915217 kubelet[2545]: I0904 17:34:00.914255 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d8b6c85-kqmww" podStartSLOduration=26.178112188 podStartE2EDuration="27.914239389s" podCreationTimestamp="2024-09-04 17:33:33 +0000 UTC" firstStartedPulling="2024-09-04 17:33:58.365316166 +0000 UTC m=+46.665206819" lastFinishedPulling="2024-09-04 17:34:00.101443377 +0000 UTC m=+48.401334020" observedRunningTime="2024-09-04 17:34:00.9138833 +0000 UTC m=+49.213773973" watchObservedRunningTime="2024-09-04 17:34:00.914239389 +0000 UTC m=+49.214130042" Sep 4 17:34:01.474858 containerd[1460]: time="2024-09-04T17:34:01.474807627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:01.475730 containerd[1460]: time="2024-09-04T17:34:01.475692169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:34:01.476917 containerd[1460]: time="2024-09-04T17:34:01.476878898Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:01.479495 containerd[1460]: time="2024-09-04T17:34:01.479468341Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:01.480181 containerd[1460]: time="2024-09-04T17:34:01.480152335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.378059479s" Sep 4 17:34:01.480240 containerd[1460]: time="2024-09-04T17:34:01.480183494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:34:01.487269 containerd[1460]: time="2024-09-04T17:34:01.487238193Z" level=info msg="CreateContainer within sandbox \"332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:34:01.509528 containerd[1460]: time="2024-09-04T17:34:01.509481037Z" level=info msg="CreateContainer within sandbox \"332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"14e521d19d90acc786198161282ddc28abd4c6a1e4ace335350c48318a630299\"" Sep 4 17:34:01.510003 containerd[1460]: time="2024-09-04T17:34:01.509974765Z" level=info msg="StartContainer for \"14e521d19d90acc786198161282ddc28abd4c6a1e4ace335350c48318a630299\"" Sep 4 17:34:01.542954 systemd[1]: Started cri-containerd-14e521d19d90acc786198161282ddc28abd4c6a1e4ace335350c48318a630299.scope - libcontainer container 14e521d19d90acc786198161282ddc28abd4c6a1e4ace335350c48318a630299. 
Sep 4 17:34:01.570298 containerd[1460]: time="2024-09-04T17:34:01.570256037Z" level=info msg="StartContainer for \"14e521d19d90acc786198161282ddc28abd4c6a1e4ace335350c48318a630299\" returns successfully" Sep 4 17:34:01.572304 containerd[1460]: time="2024-09-04T17:34:01.571602045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:34:03.046129 containerd[1460]: time="2024-09-04T17:34:03.046069174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:03.046982 containerd[1460]: time="2024-09-04T17:34:03.046921996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:34:03.048140 containerd[1460]: time="2024-09-04T17:34:03.048112121Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:03.050290 containerd[1460]: time="2024-09-04T17:34:03.050259463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:03.050937 containerd[1460]: time="2024-09-04T17:34:03.050895758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.479254169s" Sep 4 17:34:03.050990 containerd[1460]: time="2024-09-04T17:34:03.050936985Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:34:03.052967 containerd[1460]: time="2024-09-04T17:34:03.052936120Z" level=info msg="CreateContainer within sandbox \"332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:34:03.067138 containerd[1460]: time="2024-09-04T17:34:03.067098054Z" level=info msg="CreateContainer within sandbox \"332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2ed96eed7c1db26705bbd6123675c5cf6b895f1a5639e618909b7a68cfde14df\"" Sep 4 17:34:03.067554 containerd[1460]: time="2024-09-04T17:34:03.067515659Z" level=info msg="StartContainer for \"2ed96eed7c1db26705bbd6123675c5cf6b895f1a5639e618909b7a68cfde14df\"" Sep 4 17:34:03.101953 systemd[1]: Started cri-containerd-2ed96eed7c1db26705bbd6123675c5cf6b895f1a5639e618909b7a68cfde14df.scope - libcontainer container 2ed96eed7c1db26705bbd6123675c5cf6b895f1a5639e618909b7a68cfde14df. Sep 4 17:34:03.130040 containerd[1460]: time="2024-09-04T17:34:03.129990078Z" level=info msg="StartContainer for \"2ed96eed7c1db26705bbd6123675c5cf6b895f1a5639e618909b7a68cfde14df\" returns successfully" Sep 4 17:34:03.717053 systemd[1]: Started sshd@13-10.0.0.157:22-10.0.0.1:58402.service - OpenSSH per-connection server daemon (10.0.0.1:58402). Sep 4 17:34:03.764302 sshd[4632]: Accepted publickey for core from 10.0.0.1 port 58402 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:34:03.765939 sshd[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:03.769678 systemd-logind[1444]: New session 14 of user core. Sep 4 17:34:03.774944 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 4 17:34:03.841692 kubelet[2545]: I0904 17:34:03.841659 2545 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:34:03.841692 kubelet[2545]: I0904 17:34:03.841697 2545 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:34:03.906544 sshd[4632]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:03.910427 systemd[1]: sshd@13-10.0.0.157:22-10.0.0.1:58402.service: Deactivated successfully. Sep 4 17:34:03.912842 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:34:03.913546 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:34:03.914484 systemd-logind[1444]: Removed session 14. Sep 4 17:34:03.939414 kubelet[2545]: I0904 17:34:03.938950 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2m97v" podStartSLOduration=26.925515745 podStartE2EDuration="30.938936489s" podCreationTimestamp="2024-09-04 17:33:33 +0000 UTC" firstStartedPulling="2024-09-04 17:33:59.03828727 +0000 UTC m=+47.338177923" lastFinishedPulling="2024-09-04 17:34:03.051708014 +0000 UTC m=+51.351598667" observedRunningTime="2024-09-04 17:34:03.938644521 +0000 UTC m=+52.238535174" watchObservedRunningTime="2024-09-04 17:34:03.938936489 +0000 UTC m=+52.238827142" Sep 4 17:34:08.927799 systemd[1]: Started sshd@14-10.0.0.157:22-10.0.0.1:37518.service - OpenSSH per-connection server daemon (10.0.0.1:37518). Sep 4 17:34:08.974361 sshd[4649]: Accepted publickey for core from 10.0.0.1 port 37518 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:34:08.975857 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:08.979514 systemd-logind[1444]: New session 15 of user core. 
Sep 4 17:34:08.990022 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:34:09.044709 kubelet[2545]: I0904 17:34:09.044333 2545 topology_manager.go:215] "Topology Admit Handler" podUID="602c357e-dfdd-4cca-a21a-65038b8d3f05" podNamespace="calico-apiserver" podName="calico-apiserver-5b55889d56-mczjr" Sep 4 17:34:09.057646 systemd[1]: Created slice kubepods-besteffort-pod602c357e_dfdd_4cca_a21a_65038b8d3f05.slice - libcontainer container kubepods-besteffort-pod602c357e_dfdd_4cca_a21a_65038b8d3f05.slice. Sep 4 17:34:09.122691 kubelet[2545]: I0904 17:34:09.122564 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d79tj\" (UniqueName: \"kubernetes.io/projected/602c357e-dfdd-4cca-a21a-65038b8d3f05-kube-api-access-d79tj\") pod \"calico-apiserver-5b55889d56-mczjr\" (UID: \"602c357e-dfdd-4cca-a21a-65038b8d3f05\") " pod="calico-apiserver/calico-apiserver-5b55889d56-mczjr" Sep 4 17:34:09.122691 kubelet[2545]: I0904 17:34:09.122611 2545 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/602c357e-dfdd-4cca-a21a-65038b8d3f05-calico-apiserver-certs\") pod \"calico-apiserver-5b55889d56-mczjr\" (UID: \"602c357e-dfdd-4cca-a21a-65038b8d3f05\") " pod="calico-apiserver/calico-apiserver-5b55889d56-mczjr" Sep 4 17:34:09.147648 sshd[4649]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:09.155710 systemd[1]: sshd@14-10.0.0.157:22-10.0.0.1:37518.service: Deactivated successfully. Sep 4 17:34:09.158162 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:34:09.159930 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:34:09.161138 systemd-logind[1444]: Removed session 15. 
Sep 4 17:34:09.225576 kubelet[2545]: E0904 17:34:09.225411 2545 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:34:09.225576 kubelet[2545]: E0904 17:34:09.225512 2545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/602c357e-dfdd-4cca-a21a-65038b8d3f05-calico-apiserver-certs podName:602c357e-dfdd-4cca-a21a-65038b8d3f05 nodeName:}" failed. No retries permitted until 2024-09-04 17:34:09.725491694 +0000 UTC m=+58.025382347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/602c357e-dfdd-4cca-a21a-65038b8d3f05-calico-apiserver-certs") pod "calico-apiserver-5b55889d56-mczjr" (UID: "602c357e-dfdd-4cca-a21a-65038b8d3f05") : secret "calico-apiserver-certs" not found Sep 4 17:34:09.727846 kubelet[2545]: E0904 17:34:09.727794 2545 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:34:09.728008 kubelet[2545]: E0904 17:34:09.727891 2545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/602c357e-dfdd-4cca-a21a-65038b8d3f05-calico-apiserver-certs podName:602c357e-dfdd-4cca-a21a-65038b8d3f05 nodeName:}" failed. No retries permitted until 2024-09-04 17:34:10.727873603 +0000 UTC m=+59.027764266 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/602c357e-dfdd-4cca-a21a-65038b8d3f05-calico-apiserver-certs") pod "calico-apiserver-5b55889d56-mczjr" (UID: "602c357e-dfdd-4cca-a21a-65038b8d3f05") : secret "calico-apiserver-certs" not found Sep 4 17:34:10.867839 containerd[1460]: time="2024-09-04T17:34:10.867780844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b55889d56-mczjr,Uid:602c357e-dfdd-4cca-a21a-65038b8d3f05,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:34:10.989275 systemd-networkd[1397]: calif932c5d4aef: Link UP Sep 4 17:34:10.990062 systemd-networkd[1397]: calif932c5d4aef: Gained carrier Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.932 [INFO][4671] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0 calico-apiserver-5b55889d56- calico-apiserver 602c357e-dfdd-4cca-a21a-65038b8d3f05 973 0 2024-09-04 17:34:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b55889d56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b55889d56-mczjr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif932c5d4aef [] []}} ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Namespace="calico-apiserver" Pod="calico-apiserver-5b55889d56-mczjr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b55889d56--mczjr-" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.932 [INFO][4671] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Namespace="calico-apiserver" Pod="calico-apiserver-5b55889d56-mczjr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.957 [INFO][4684] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" HandleID="k8s-pod-network.995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Workload="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.965 [INFO][4684] ipam_plugin.go 270: Auto assigning IP ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" HandleID="k8s-pod-network.995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Workload="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000615e60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b55889d56-mczjr", "timestamp":"2024-09-04 17:34:10.957297467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.965 [INFO][4684] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.965 [INFO][4684] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.965 [INFO][4684] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.966 [INFO][4684] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" host="localhost" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.969 [INFO][4684] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.972 [INFO][4684] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.973 [INFO][4684] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.975 [INFO][4684] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.975 [INFO][4684] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" host="localhost" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.976 [INFO][4684] ipam.go 1685: Creating new handle: k8s-pod-network.995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377 Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.979 [INFO][4684] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" host="localhost" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.984 [INFO][4684] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" host="localhost" Sep 4 
17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.984 [INFO][4684] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" host="localhost" Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.984 [INFO][4684] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:34:11.000701 containerd[1460]: 2024-09-04 17:34:10.984 [INFO][4684] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" HandleID="k8s-pod-network.995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Workload="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" Sep 4 17:34:11.002291 containerd[1460]: 2024-09-04 17:34:10.987 [INFO][4671] k8s.go 386: Populated endpoint ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Namespace="calico-apiserver" Pod="calico-apiserver-5b55889d56-mczjr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0", GenerateName:"calico-apiserver-5b55889d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"602c357e-dfdd-4cca-a21a-65038b8d3f05", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 34, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b55889d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b55889d56-mczjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif932c5d4aef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:11.002291 containerd[1460]: 2024-09-04 17:34:10.987 [INFO][4671] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Namespace="calico-apiserver" Pod="calico-apiserver-5b55889d56-mczjr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" Sep 4 17:34:11.002291 containerd[1460]: 2024-09-04 17:34:10.987 [INFO][4671] dataplane_linux.go 68: Setting the host side veth name to calif932c5d4aef ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Namespace="calico-apiserver" Pod="calico-apiserver-5b55889d56-mczjr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" Sep 4 17:34:11.002291 containerd[1460]: 2024-09-04 17:34:10.989 [INFO][4671] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Namespace="calico-apiserver" Pod="calico-apiserver-5b55889d56-mczjr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" Sep 4 17:34:11.002291 containerd[1460]: 2024-09-04 17:34:10.990 [INFO][4671] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Namespace="calico-apiserver" Pod="calico-apiserver-5b55889d56-mczjr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0", GenerateName:"calico-apiserver-5b55889d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"602c357e-dfdd-4cca-a21a-65038b8d3f05", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 34, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b55889d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377", Pod:"calico-apiserver-5b55889d56-mczjr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif932c5d4aef", MAC:"a2:a9:de:b0:c3:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:11.002291 containerd[1460]: 2024-09-04 17:34:10.997 [INFO][4671] k8s.go 500: Wrote updated endpoint to datastore ContainerID="995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377" Namespace="calico-apiserver" Pod="calico-apiserver-5b55889d56-mczjr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b55889d56--mczjr-eth0" Sep 4 17:34:11.247538 
containerd[1460]: time="2024-09-04T17:34:11.245985192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:34:11.248258 containerd[1460]: time="2024-09-04T17:34:11.246073577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:11.248258 containerd[1460]: time="2024-09-04T17:34:11.246090329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:34:11.248258 containerd[1460]: time="2024-09-04T17:34:11.246114634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:11.275049 systemd[1]: Started cri-containerd-995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377.scope - libcontainer container 995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377. 
Sep 4 17:34:11.286888 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:34:11.312337 containerd[1460]: time="2024-09-04T17:34:11.312276760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b55889d56-mczjr,Uid:602c357e-dfdd-4cca-a21a-65038b8d3f05,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377\"" Sep 4 17:34:11.314094 containerd[1460]: time="2024-09-04T17:34:11.314068484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:34:11.755862 containerd[1460]: time="2024-09-04T17:34:11.755783883Z" level=info msg="StopPodSandbox for \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\"" Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.789 [WARNING][4763] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0", GenerateName:"calico-kube-controllers-6d8b6c85-", Namespace:"calico-system", SelfLink:"", UID:"66bd0464-0844-44e1-8cd9-a36b4d73396c", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8b6c85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e", Pod:"calico-kube-controllers-6d8b6c85-kqmww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2233885d00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.790 [INFO][4763] k8s.go 608: Cleaning up netns ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.790 [INFO][4763] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" iface="eth0" netns="" Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.790 [INFO][4763] k8s.go 615: Releasing IP address(es) ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.790 [INFO][4763] utils.go 188: Calico CNI releasing IP address ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.809 [INFO][4772] ipam_plugin.go 417: Releasing address using handleID ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" HandleID="k8s-pod-network.fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.809 [INFO][4772] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.809 [INFO][4772] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.814 [WARNING][4772] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" HandleID="k8s-pod-network.fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.814 [INFO][4772] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" HandleID="k8s-pod-network.fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.815 [INFO][4772] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:34:11.820589 containerd[1460]: 2024-09-04 17:34:11.818 [INFO][4763] k8s.go 621: Teardown processing complete. 
ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:34:11.821022 containerd[1460]: time="2024-09-04T17:34:11.820621783Z" level=info msg="TearDown network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\" successfully" Sep 4 17:34:11.821022 containerd[1460]: time="2024-09-04T17:34:11.820644175Z" level=info msg="StopPodSandbox for \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\" returns successfully" Sep 4 17:34:11.821261 containerd[1460]: time="2024-09-04T17:34:11.821198546Z" level=info msg="RemovePodSandbox for \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\"" Sep 4 17:34:11.823670 containerd[1460]: time="2024-09-04T17:34:11.823646201Z" level=info msg="Forcibly stopping sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\"" Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.871 [WARNING][4795] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0", GenerateName:"calico-kube-controllers-6d8b6c85-", Namespace:"calico-system", SelfLink:"", UID:"66bd0464-0844-44e1-8cd9-a36b4d73396c", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8b6c85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e0e4be99968bd3aff5ae6d9a3960fc046d44ea434e2381f9bcf80193f08eb2e", Pod:"calico-kube-controllers-6d8b6c85-kqmww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2233885d00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.872 [INFO][4795] k8s.go 608: Cleaning up netns ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.872 [INFO][4795] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" iface="eth0" netns="" Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.872 [INFO][4795] k8s.go 615: Releasing IP address(es) ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.872 [INFO][4795] utils.go 188: Calico CNI releasing IP address ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.888 [INFO][4802] ipam_plugin.go 417: Releasing address using handleID ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" HandleID="k8s-pod-network.fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.888 [INFO][4802] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.888 [INFO][4802] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.893 [WARNING][4802] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" HandleID="k8s-pod-network.fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.894 [INFO][4802] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" HandleID="k8s-pod-network.fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Workload="localhost-k8s-calico--kube--controllers--6d8b6c85--kqmww-eth0" Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.895 [INFO][4802] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:34:11.899146 containerd[1460]: 2024-09-04 17:34:11.897 [INFO][4795] k8s.go 621: Teardown processing complete. ContainerID="fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c" Sep 4 17:34:11.899901 containerd[1460]: time="2024-09-04T17:34:11.899178567Z" level=info msg="TearDown network for sandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\" successfully" Sep 4 17:34:11.919563 containerd[1460]: time="2024-09-04T17:34:11.919502151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:34:11.929705 containerd[1460]: time="2024-09-04T17:34:11.929662794Z" level=info msg="RemovePodSandbox \"fb2a19313fbba2b97d56ce50c4ce4a42e1b220e308ece3a6fb2fd0c10c36897c\" returns successfully" Sep 4 17:34:11.935093 containerd[1460]: time="2024-09-04T17:34:11.935043797Z" level=info msg="StopPodSandbox for \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\"" Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.967 [WARNING][4824] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2m97v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5", Pod:"csi-node-driver-2m97v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali48ef8df6221", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.968 [INFO][4824] k8s.go 608: Cleaning up netns ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.968 [INFO][4824] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" iface="eth0" netns="" Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.968 [INFO][4824] k8s.go 615: Releasing IP address(es) ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.968 [INFO][4824] utils.go 188: Calico CNI releasing IP address ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.986 [INFO][4832] ipam_plugin.go 417: Releasing address using handleID ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" HandleID="k8s-pod-network.effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.986 [INFO][4832] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.987 [INFO][4832] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.991 [WARNING][4832] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" HandleID="k8s-pod-network.effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.991 [INFO][4832] ipam_plugin.go 445: Releasing address using workloadID ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" HandleID="k8s-pod-network.effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.992 [INFO][4832] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:34:11.997775 containerd[1460]: 2024-09-04 17:34:11.995 [INFO][4824] k8s.go 621: Teardown processing complete. ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:34:11.998305 containerd[1460]: time="2024-09-04T17:34:11.997805548Z" level=info msg="TearDown network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\" successfully" Sep 4 17:34:11.998305 containerd[1460]: time="2024-09-04T17:34:11.997851816Z" level=info msg="StopPodSandbox for \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\" returns successfully" Sep 4 17:34:11.998374 containerd[1460]: time="2024-09-04T17:34:11.998326797Z" level=info msg="RemovePodSandbox for \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\"" Sep 4 17:34:11.998374 containerd[1460]: time="2024-09-04T17:34:11.998364067Z" level=info msg="Forcibly stopping sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\"" Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.028 [WARNING][4854] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2m97v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3ad58bb-75f5-444f-ace4-e9ea2e8aac02", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"332919a68eac2cd17f776ff46f26eab465f0f73db64244e9db80d4512138e1c5", Pod:"csi-node-driver-2m97v", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali48ef8df6221", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.028 [INFO][4854] k8s.go 608: Cleaning up netns ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.028 [INFO][4854] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" iface="eth0" netns="" Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.028 [INFO][4854] k8s.go 615: Releasing IP address(es) ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.028 [INFO][4854] utils.go 188: Calico CNI releasing IP address ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.048 [INFO][4862] ipam_plugin.go 417: Releasing address using handleID ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" HandleID="k8s-pod-network.effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.048 [INFO][4862] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.048 [INFO][4862] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.052 [WARNING][4862] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" HandleID="k8s-pod-network.effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.052 [INFO][4862] ipam_plugin.go 445: Releasing address using workloadID ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" HandleID="k8s-pod-network.effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Workload="localhost-k8s-csi--node--driver--2m97v-eth0" Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.053 [INFO][4862] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:34:12.058149 containerd[1460]: 2024-09-04 17:34:12.055 [INFO][4854] k8s.go 621: Teardown processing complete. ContainerID="effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c" Sep 4 17:34:12.058149 containerd[1460]: time="2024-09-04T17:34:12.058018853Z" level=info msg="TearDown network for sandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\" successfully" Sep 4 17:34:12.062523 containerd[1460]: time="2024-09-04T17:34:12.062484176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:34:12.062583 containerd[1460]: time="2024-09-04T17:34:12.062532457Z" level=info msg="RemovePodSandbox \"effad3b070d2eaef044ec599a879c84020d215325c30788c68f6c05d36ad4b0c\" returns successfully" Sep 4 17:34:12.062972 containerd[1460]: time="2024-09-04T17:34:12.062944911Z" level=info msg="StopPodSandbox for \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\"" Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.094 [WARNING][4885] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"302bd635-ccd1-4c46-9368-eb8aaa152294", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a", Pod:"coredns-7db6d8ff4d-2nnsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54f29dbde8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.094 [INFO][4885] k8s.go 608: Cleaning up netns 
ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.094 [INFO][4885] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" iface="eth0" netns="" Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.094 [INFO][4885] k8s.go 615: Releasing IP address(es) ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.094 [INFO][4885] utils.go 188: Calico CNI releasing IP address ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.114 [INFO][4893] ipam_plugin.go 417: Releasing address using handleID ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" HandleID="k8s-pod-network.6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.114 [INFO][4893] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.114 [INFO][4893] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.119 [WARNING][4893] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" HandleID="k8s-pod-network.6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.119 [INFO][4893] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" HandleID="k8s-pod-network.6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.120 [INFO][4893] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:34:12.125098 containerd[1460]: 2024-09-04 17:34:12.122 [INFO][4885] k8s.go 621: Teardown processing complete. ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:34:12.125757 containerd[1460]: time="2024-09-04T17:34:12.125131089Z" level=info msg="TearDown network for sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\" successfully" Sep 4 17:34:12.125757 containerd[1460]: time="2024-09-04T17:34:12.125156277Z" level=info msg="StopPodSandbox for \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\" returns successfully" Sep 4 17:34:12.125757 containerd[1460]: time="2024-09-04T17:34:12.125702622Z" level=info msg="RemovePodSandbox for \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\"" Sep 4 17:34:12.125757 containerd[1460]: time="2024-09-04T17:34:12.125734692Z" level=info msg="Forcibly stopping sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\"" Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.157 [WARNING][4915] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"302bd635-ccd1-4c46-9368-eb8aaa152294", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d2099ea39e3d6751a563a71712ea226bc9a19b06826a6c2d0239a5f93c6050a", Pod:"coredns-7db6d8ff4d-2nnsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54f29dbde8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.158 [INFO][4915] k8s.go 608: Cleaning up netns 
ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.158 [INFO][4915] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" iface="eth0" netns="" Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.158 [INFO][4915] k8s.go 615: Releasing IP address(es) ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.158 [INFO][4915] utils.go 188: Calico CNI releasing IP address ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.176 [INFO][4923] ipam_plugin.go 417: Releasing address using handleID ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" HandleID="k8s-pod-network.6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.176 [INFO][4923] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.176 [INFO][4923] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.180 [WARNING][4923] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" HandleID="k8s-pod-network.6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.180 [INFO][4923] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" HandleID="k8s-pod-network.6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Workload="localhost-k8s-coredns--7db6d8ff4d--2nnsb-eth0" Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.181 [INFO][4923] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:34:12.185783 containerd[1460]: 2024-09-04 17:34:12.183 [INFO][4915] k8s.go 621: Teardown processing complete. ContainerID="6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657" Sep 4 17:34:12.186234 containerd[1460]: time="2024-09-04T17:34:12.185805479Z" level=info msg="TearDown network for sandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\" successfully" Sep 4 17:34:12.189525 containerd[1460]: time="2024-09-04T17:34:12.189494324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:34:12.189576 containerd[1460]: time="2024-09-04T17:34:12.189538657Z" level=info msg="RemovePodSandbox \"6459d9d53917060c10d8fd9bc93172efd39b27d45a7cb291e06f24958773a657\" returns successfully" Sep 4 17:34:12.189976 containerd[1460]: time="2024-09-04T17:34:12.189950151Z" level=info msg="StopPodSandbox for \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\"" Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.219 [WARNING][4945] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"72f5b034-8e53-4dda-a746-9a05ab61c7bd", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e", Pod:"coredns-7db6d8ff4d-hjz5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69a22032bf6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.219 [INFO][4945] k8s.go 608: Cleaning up netns ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.219 [INFO][4945] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" iface="eth0" netns="" Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.219 [INFO][4945] k8s.go 615: Releasing IP address(es) ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.219 [INFO][4945] utils.go 188: Calico CNI releasing IP address ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.236 [INFO][4953] ipam_plugin.go 417: Releasing address using handleID ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" HandleID="k8s-pod-network.2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.236 [INFO][4953] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.236 [INFO][4953] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.240 [WARNING][4953] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" HandleID="k8s-pod-network.2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.240 [INFO][4953] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" HandleID="k8s-pod-network.2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.241 [INFO][4953] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:34:12.246311 containerd[1460]: 2024-09-04 17:34:12.244 [INFO][4945] k8s.go 621: Teardown processing complete. ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:34:12.246720 containerd[1460]: time="2024-09-04T17:34:12.246355586Z" level=info msg="TearDown network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\" successfully" Sep 4 17:34:12.246720 containerd[1460]: time="2024-09-04T17:34:12.246380814Z" level=info msg="StopPodSandbox for \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\" returns successfully" Sep 4 17:34:12.246875 containerd[1460]: time="2024-09-04T17:34:12.246806864Z" level=info msg="RemovePodSandbox for \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\"" Sep 4 17:34:12.246875 containerd[1460]: time="2024-09-04T17:34:12.246846338Z" level=info msg="Forcibly stopping sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\"" Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.281 [WARNING][4976] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"72f5b034-8e53-4dda-a746-9a05ab61c7bd", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edbac813653950a81d73b27d27997c9749ed683d4109b68da545ca6d03a5a53e", Pod:"coredns-7db6d8ff4d-hjz5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69a22032bf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.281 [INFO][4976] k8s.go 
608: Cleaning up netns ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.281 [INFO][4976] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" iface="eth0" netns="" Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.281 [INFO][4976] k8s.go 615: Releasing IP address(es) ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.281 [INFO][4976] utils.go 188: Calico CNI releasing IP address ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.299 [INFO][4984] ipam_plugin.go 417: Releasing address using handleID ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" HandleID="k8s-pod-network.2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.299 [INFO][4984] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.299 [INFO][4984] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.303 [WARNING][4984] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" HandleID="k8s-pod-network.2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.303 [INFO][4984] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" HandleID="k8s-pod-network.2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Workload="localhost-k8s-coredns--7db6d8ff4d--hjz5r-eth0" Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.304 [INFO][4984] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:34:12.308644 containerd[1460]: 2024-09-04 17:34:12.306 [INFO][4976] k8s.go 621: Teardown processing complete. ContainerID="2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474" Sep 4 17:34:12.309135 containerd[1460]: time="2024-09-04T17:34:12.308631534Z" level=info msg="TearDown network for sandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\" successfully" Sep 4 17:34:12.312437 containerd[1460]: time="2024-09-04T17:34:12.312408584Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:34:12.312503 containerd[1460]: time="2024-09-04T17:34:12.312454610Z" level=info msg="RemovePodSandbox \"2afdae7010057f3642d851aeb8dc8de15dc6bfc0768d88a2c741debcc34fe474\" returns successfully" Sep 4 17:34:12.538004 systemd-networkd[1397]: calif932c5d4aef: Gained IPv6LL Sep 4 17:34:13.224806 containerd[1460]: time="2024-09-04T17:34:13.224750054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:13.225561 containerd[1460]: time="2024-09-04T17:34:13.225486362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Sep 4 17:34:13.226554 containerd[1460]: time="2024-09-04T17:34:13.226522123Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:13.228884 containerd[1460]: time="2024-09-04T17:34:13.228849367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:13.229461 containerd[1460]: time="2024-09-04T17:34:13.229427948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 1.915325079s" Sep 4 17:34:13.229495 containerd[1460]: time="2024-09-04T17:34:13.229464396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Sep 4 17:34:13.231434 containerd[1460]: 
time="2024-09-04T17:34:13.231407407Z" level=info msg="CreateContainer within sandbox \"995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:34:13.243035 containerd[1460]: time="2024-09-04T17:34:13.242949692Z" level=info msg="CreateContainer within sandbox \"995b01691ef85768de97aa4cb54645028a41d8101ce55589bc6f1015da96b377\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"531e3e6ceb03df9a9eeb1cc3a92ca86992d4b8ec5167bdd793db43f3693abe28\"" Sep 4 17:34:13.243447 containerd[1460]: time="2024-09-04T17:34:13.243416812Z" level=info msg="StartContainer for \"531e3e6ceb03df9a9eeb1cc3a92ca86992d4b8ec5167bdd793db43f3693abe28\"" Sep 4 17:34:13.270692 systemd[1]: run-containerd-runc-k8s.io-531e3e6ceb03df9a9eeb1cc3a92ca86992d4b8ec5167bdd793db43f3693abe28-runc.wdiZKt.mount: Deactivated successfully. Sep 4 17:34:13.283966 systemd[1]: Started cri-containerd-531e3e6ceb03df9a9eeb1cc3a92ca86992d4b8ec5167bdd793db43f3693abe28.scope - libcontainer container 531e3e6ceb03df9a9eeb1cc3a92ca86992d4b8ec5167bdd793db43f3693abe28. 
Sep 4 17:34:13.329157 containerd[1460]: time="2024-09-04T17:34:13.329118001Z" level=info msg="StartContainer for \"531e3e6ceb03df9a9eeb1cc3a92ca86992d4b8ec5167bdd793db43f3693abe28\" returns successfully" Sep 4 17:34:13.973323 kubelet[2545]: I0904 17:34:13.973248 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b55889d56-mczjr" podStartSLOduration=3.056790572 podStartE2EDuration="4.973230559s" podCreationTimestamp="2024-09-04 17:34:09 +0000 UTC" firstStartedPulling="2024-09-04 17:34:11.313662712 +0000 UTC m=+59.613553365" lastFinishedPulling="2024-09-04 17:34:13.230102699 +0000 UTC m=+61.529993352" observedRunningTime="2024-09-04 17:34:13.972563583 +0000 UTC m=+62.272454236" watchObservedRunningTime="2024-09-04 17:34:13.973230559 +0000 UTC m=+62.273121212" Sep 4 17:34:14.163022 systemd[1]: Started sshd@15-10.0.0.157:22-10.0.0.1:37520.service - OpenSSH per-connection server daemon (10.0.0.1:37520). Sep 4 17:34:14.204518 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 37520 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:34:14.206083 sshd[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:14.210139 systemd-logind[1444]: New session 16 of user core. Sep 4 17:34:14.218931 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:34:14.337241 sshd[5048]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:14.347129 systemd[1]: sshd@15-10.0.0.157:22-10.0.0.1:37520.service: Deactivated successfully. Sep 4 17:34:14.349335 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:34:14.350916 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:34:14.357637 systemd[1]: Started sshd@16-10.0.0.157:22-10.0.0.1:37532.service - OpenSSH per-connection server daemon (10.0.0.1:37532). Sep 4 17:34:14.358614 systemd-logind[1444]: Removed session 16. 
Sep 4 17:34:14.387486 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 37532 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:34:14.388743 sshd[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:14.392441 systemd-logind[1444]: New session 17 of user core. Sep 4 17:34:14.398931 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:34:14.583348 sshd[5083]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:14.595535 systemd[1]: sshd@16-10.0.0.157:22-10.0.0.1:37532.service: Deactivated successfully. Sep 4 17:34:14.598159 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:34:14.600529 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:34:14.608280 systemd[1]: Started sshd@17-10.0.0.157:22-10.0.0.1:37534.service - OpenSSH per-connection server daemon (10.0.0.1:37534). Sep 4 17:34:14.609350 systemd-logind[1444]: Removed session 17. Sep 4 17:34:14.640337 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 37534 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:34:14.641896 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:14.646204 systemd-logind[1444]: New session 18 of user core. Sep 4 17:34:14.651093 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:34:14.968268 kubelet[2545]: I0904 17:34:14.968140 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:34:15.975227 sshd[5100]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:15.983838 systemd[1]: sshd@17-10.0.0.157:22-10.0.0.1:37534.service: Deactivated successfully. Sep 4 17:34:15.985544 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:34:15.987086 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. 
Sep 4 17:34:15.993290 systemd[1]: Started sshd@18-10.0.0.157:22-10.0.0.1:49470.service - OpenSSH per-connection server daemon (10.0.0.1:49470).
Sep 4 17:34:15.994885 systemd-logind[1444]: Removed session 18.
Sep 4 17:34:16.025959 sshd[5121]: Accepted publickey for core from 10.0.0.1 port 49470 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:34:16.027353 sshd[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:16.031358 systemd-logind[1444]: New session 19 of user core.
Sep 4 17:34:16.037117 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 17:34:16.241075 sshd[5121]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:16.249077 systemd[1]: sshd@18-10.0.0.157:22-10.0.0.1:49470.service: Deactivated successfully.
Sep 4 17:34:16.251140 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 17:34:16.252923 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit.
Sep 4 17:34:16.262454 systemd[1]: Started sshd@19-10.0.0.157:22-10.0.0.1:49478.service - OpenSSH per-connection server daemon (10.0.0.1:49478).
Sep 4 17:34:16.263315 systemd-logind[1444]: Removed session 19.
Sep 4 17:34:16.292671 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 49478 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:34:16.294296 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:16.299221 systemd-logind[1444]: New session 20 of user core.
Sep 4 17:34:16.309036 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:34:16.422054 sshd[5135]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:16.427086 systemd[1]: sshd@19-10.0.0.157:22-10.0.0.1:49478.service: Deactivated successfully.
Sep 4 17:34:16.429269 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:34:16.430019 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:34:16.431034 systemd-logind[1444]: Removed session 20.
Sep 4 17:34:21.434998 systemd[1]: Started sshd@20-10.0.0.157:22-10.0.0.1:49486.service - OpenSSH per-connection server daemon (10.0.0.1:49486).
Sep 4 17:34:21.478599 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 49486 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:34:21.479904 sshd[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:21.483634 systemd-logind[1444]: New session 21 of user core.
Sep 4 17:34:21.492949 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:34:21.591497 sshd[5157]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:21.595059 systemd[1]: sshd@20-10.0.0.157:22-10.0.0.1:49486.service: Deactivated successfully.
Sep 4 17:34:21.597111 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:34:21.597675 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:34:21.598458 systemd-logind[1444]: Removed session 21.
Sep 4 17:34:26.602689 systemd[1]: Started sshd@21-10.0.0.157:22-10.0.0.1:42162.service - OpenSSH per-connection server daemon (10.0.0.1:42162).
Sep 4 17:34:26.638210 sshd[5221]: Accepted publickey for core from 10.0.0.1 port 42162 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:34:26.639585 sshd[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:26.643106 systemd-logind[1444]: New session 22 of user core.
Sep 4 17:34:26.654945 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 17:34:26.753837 sshd[5221]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:26.757261 systemd[1]: sshd@21-10.0.0.157:22-10.0.0.1:42162.service: Deactivated successfully.
Sep 4 17:34:26.759068 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 17:34:26.759587 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit.
Sep 4 17:34:26.760399 systemd-logind[1444]: Removed session 22.
Sep 4 17:34:30.772752 kubelet[2545]: E0904 17:34:30.772698 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:34:31.780600 systemd[1]: Started sshd@22-10.0.0.157:22-10.0.0.1:42178.service - OpenSSH per-connection server daemon (10.0.0.1:42178).
Sep 4 17:34:31.847540 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 42178 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:34:31.850085 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:31.857656 systemd-logind[1444]: New session 23 of user core.
Sep 4 17:34:31.868141 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:34:32.027167 sshd[5239]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:32.032363 systemd[1]: sshd@22-10.0.0.157:22-10.0.0.1:42178.service: Deactivated successfully.
Sep 4 17:34:32.036892 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:34:32.037980 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:34:32.039222 systemd-logind[1444]: Removed session 23.
Sep 4 17:34:32.772607 kubelet[2545]: E0904 17:34:32.772552 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:34:33.772856 kubelet[2545]: E0904 17:34:33.772152 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:34:33.772856 kubelet[2545]: E0904 17:34:33.772394 2545 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:34:37.036351 systemd[1]: Started sshd@23-10.0.0.157:22-10.0.0.1:41986.service - OpenSSH per-connection server daemon (10.0.0.1:41986).
Sep 4 17:34:37.071490 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 41986 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:34:37.073004 sshd[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:34:37.077153 systemd-logind[1444]: New session 24 of user core.
Sep 4 17:34:37.087995 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:34:37.200146 sshd[5265]: pam_unix(sshd:session): session closed for user core
Sep 4 17:34:37.204933 systemd[1]: sshd@23-10.0.0.157:22-10.0.0.1:41986.service: Deactivated successfully.
Sep 4 17:34:37.207070 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:34:37.207889 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:34:37.208860 systemd-logind[1444]: Removed session 24.
Sep 4 17:34:38.725392 kubelet[2545]: I0904 17:34:38.725342 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"