Sep 4 17:19:11.933947 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024 Sep 4 17:19:11.933974 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:19:11.933988 kernel: BIOS-provided physical RAM map: Sep 4 17:19:11.933997 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 4 17:19:11.934005 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 4 17:19:11.934013 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 4 17:19:11.934023 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 4 17:19:11.934032 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 4 17:19:11.934040 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 4 17:19:11.934049 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 4 17:19:11.934060 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 4 17:19:11.934069 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 4 17:19:11.934077 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 4 17:19:11.934086 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 4 17:19:11.934097 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 4 17:19:11.934109 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 4 17:19:11.934119 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 4 17:19:11.934128 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 4 17:19:11.934137 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 4 17:19:11.934146 kernel: NX (Execute Disable) protection: active Sep 4 17:19:11.934156 kernel: APIC: Static calls initialized Sep 4 17:19:11.934165 kernel: efi: EFI v2.7 by EDK II Sep 4 17:19:11.934174 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b4ef018 Sep 4 17:19:11.934183 kernel: SMBIOS 2.8 present. Sep 4 17:19:11.934193 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Sep 4 17:19:11.934202 kernel: Hypervisor detected: KVM Sep 4 17:19:11.934211 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 17:19:11.934223 kernel: kvm-clock: using sched offset of 4299186879 cycles Sep 4 17:19:11.934232 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 17:19:11.934242 kernel: tsc: Detected 2794.746 MHz processor Sep 4 17:19:11.934252 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 17:19:11.934262 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 17:19:11.934271 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 4 17:19:11.934281 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 4 17:19:11.934290 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 17:19:11.934300 kernel: Using GB pages for direct mapping Sep 4 17:19:11.934312 kernel: Secure boot disabled Sep 4 17:19:11.934321 kernel: ACPI: Early table checksum verification disabled Sep 4 17:19:11.934331 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 4 17:19:11.934341 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:19:11.934355 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:19:11.934365 kernel: ACPI: DSDT 
0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:19:11.934378 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 4 17:19:11.934388 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:19:11.934398 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:19:11.934408 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:19:11.934419 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 4 17:19:11.934429 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Sep 4 17:19:11.934439 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Sep 4 17:19:11.934449 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 4 17:19:11.934462 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Sep 4 17:19:11.934473 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Sep 4 17:19:11.934483 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Sep 4 17:19:11.934492 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Sep 4 17:19:11.934502 kernel: No NUMA configuration found Sep 4 17:19:11.934513 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 4 17:19:11.934523 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 4 17:19:11.934533 kernel: Zone ranges: Sep 4 17:19:11.934543 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 17:19:11.934558 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 4 17:19:11.934568 kernel: Normal empty Sep 4 17:19:11.934581 kernel: Movable zone start for each node Sep 4 17:19:11.934592 kernel: Early memory node ranges Sep 4 17:19:11.934604 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 4 17:19:11.934614 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 4 17:19:11.934624 
kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 4 17:19:11.934634 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 4 17:19:11.934643 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 4 17:19:11.934654 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 4 17:19:11.934667 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 4 17:19:11.934677 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:19:11.934687 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 4 17:19:11.934697 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 4 17:19:11.934707 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 17:19:11.934717 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 4 17:19:11.934727 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 4 17:19:11.934737 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 4 17:19:11.934747 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 4 17:19:11.934759 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 17:19:11.934769 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 17:19:11.934779 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 17:19:11.934789 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 17:19:11.934799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 17:19:11.934829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 17:19:11.934840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 17:19:11.934850 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 17:19:11.934860 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 17:19:11.934873 kernel: TSC deadline timer available Sep 4 17:19:11.934892 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 4 17:19:11.934902 kernel: kvm-guest: 
APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 17:19:11.934912 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 4 17:19:11.934922 kernel: kvm-guest: setup PV sched yield Sep 4 17:19:11.934932 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Sep 4 17:19:11.934942 kernel: Booting paravirtualized kernel on KVM Sep 4 17:19:11.934952 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 17:19:11.934963 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 4 17:19:11.934975 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Sep 4 17:19:11.934985 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Sep 4 17:19:11.934995 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 4 17:19:11.935004 kernel: kvm-guest: PV spinlocks enabled Sep 4 17:19:11.935014 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 17:19:11.935026 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf Sep 4 17:19:11.935037 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:19:11.935046 kernel: random: crng init done Sep 4 17:19:11.935056 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:19:11.935069 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:19:11.935079 kernel: Fallback order for Node 0: 0 Sep 4 17:19:11.935089 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Sep 4 17:19:11.935099 kernel: Policy zone: DMA32 Sep 4 17:19:11.935109 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:19:11.935119 kernel: Memory: 2388164K/2567000K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49336K init, 2008K bss, 178576K reserved, 0K cma-reserved) Sep 4 17:19:11.935129 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:19:11.935139 kernel: ftrace: allocating 37670 entries in 148 pages Sep 4 17:19:11.935151 kernel: ftrace: allocated 148 pages with 3 groups Sep 4 17:19:11.935161 kernel: Dynamic Preempt: voluntary Sep 4 17:19:11.935171 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:19:11.935182 kernel: rcu: RCU event tracing is enabled. Sep 4 17:19:11.935192 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:19:11.935212 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:19:11.935224 kernel: Rude variant of Tasks RCU enabled. Sep 4 17:19:11.935235 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:19:11.935245 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 17:19:11.935256 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:19:11.935266 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 4 17:19:11.935277 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 4 17:19:11.935287 kernel: Console: colour dummy device 80x25 Sep 4 17:19:11.935300 kernel: printk: console [ttyS0] enabled Sep 4 17:19:11.935311 kernel: ACPI: Core revision 20230628 Sep 4 17:19:11.935321 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 4 17:19:11.935332 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 17:19:11.935345 kernel: x2apic enabled Sep 4 17:19:11.935355 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 17:19:11.935366 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 4 17:19:11.935376 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 4 17:19:11.935387 kernel: kvm-guest: setup PV IPIs Sep 4 17:19:11.935397 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 4 17:19:11.935408 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 4 17:19:11.935419 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) Sep 4 17:19:11.935429 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 4 17:19:11.935442 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 4 17:19:11.935452 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 4 17:19:11.935463 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 17:19:11.935476 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 17:19:11.935487 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Sep 4 17:19:11.935497 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Sep 4 17:19:11.935507 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 4 17:19:11.935518 kernel: RETBleed: Mitigation: untrained return thunk Sep 4 17:19:11.935528 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 17:19:11.935541 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 17:19:11.935552 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 4 17:19:11.935563 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 4 17:19:11.935573 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 4 17:19:11.935584 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 17:19:11.935594 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 17:19:11.935604 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 17:19:11.935615 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 17:19:11.935625 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 4 17:19:11.935638 kernel: Freeing SMP alternatives memory: 32K Sep 4 17:19:11.935648 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:19:11.935659 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:19:11.935669 kernel: SELinux: Initializing. Sep 4 17:19:11.935679 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:19:11.935690 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:19:11.935701 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 4 17:19:11.935711 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:19:11.935724 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:19:11.935734 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:19:11.935745 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 4 17:19:11.935755 kernel: ... version: 0 Sep 4 17:19:11.935765 kernel: ... bit width: 48 Sep 4 17:19:11.935776 kernel: ... generic registers: 6 Sep 4 17:19:11.935786 kernel: ... value mask: 0000ffffffffffff Sep 4 17:19:11.935796 kernel: ... max period: 00007fffffffffff Sep 4 17:19:11.935806 kernel: ... fixed-purpose events: 0 Sep 4 17:19:11.935841 kernel: ... event mask: 000000000000003f Sep 4 17:19:11.935856 kernel: signal: max sigframe size: 1776 Sep 4 17:19:11.935866 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:19:11.935887 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:19:11.935898 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:19:11.935908 kernel: smpboot: x86: Booting SMP configuration: Sep 4 17:19:11.935918 kernel: .... 
node #0, CPUs: #1 #2 #3 Sep 4 17:19:11.935929 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:19:11.935939 kernel: smpboot: Max logical packages: 1 Sep 4 17:19:11.935949 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Sep 4 17:19:11.935963 kernel: devtmpfs: initialized Sep 4 17:19:11.935973 kernel: x86/mm: Memory block size: 128MB Sep 4 17:19:11.935984 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 4 17:19:11.935994 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 4 17:19:11.936005 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 4 17:19:11.936015 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 4 17:19:11.936026 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 4 17:19:11.936036 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:19:11.936047 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:19:11.936060 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:19:11.936071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:19:11.936081 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:19:11.936092 kernel: audit: type=2000 audit(1725470350.859:1): state=initialized audit_enabled=0 res=1 Sep 4 17:19:11.936102 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:19:11.936113 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 17:19:11.936123 kernel: cpuidle: using governor menu Sep 4 17:19:11.936134 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:19:11.936144 kernel: dca service started, version 1.12.1 Sep 4 17:19:11.936158 kernel: PCI: Using configuration type 1 for base access Sep 4 17:19:11.936168 kernel: PCI: Using configuration type 1 for 
extended access Sep 4 17:19:11.936179 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 4 17:19:11.936189 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:19:11.936200 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:19:11.936210 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:19:11.936220 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:19:11.936231 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:19:11.936241 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:19:11.936254 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:19:11.936264 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:19:11.936275 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:19:11.936285 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 17:19:11.936296 kernel: ACPI: Interpreter enabled Sep 4 17:19:11.936306 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 17:19:11.936316 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 17:19:11.936327 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 17:19:11.936337 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 17:19:11.936351 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 4 17:19:11.936361 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:19:11.936571 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:19:11.936591 kernel: acpiphp: Slot [3] registered Sep 4 17:19:11.936603 kernel: acpiphp: Slot [4] registered Sep 4 17:19:11.936614 kernel: acpiphp: Slot [5] registered Sep 4 17:19:11.936625 kernel: acpiphp: Slot [6] registered Sep 4 17:19:11.936635 kernel: acpiphp: Slot [7] registered Sep 4 17:19:11.936649 kernel: acpiphp: Slot [8] registered Sep 4 17:19:11.936659 kernel: acpiphp: Slot [9] 
registered Sep 4 17:19:11.936670 kernel: acpiphp: Slot [10] registered Sep 4 17:19:11.936680 kernel: acpiphp: Slot [11] registered Sep 4 17:19:11.936690 kernel: acpiphp: Slot [12] registered Sep 4 17:19:11.936701 kernel: acpiphp: Slot [13] registered Sep 4 17:19:11.936711 kernel: acpiphp: Slot [14] registered Sep 4 17:19:11.936721 kernel: acpiphp: Slot [15] registered Sep 4 17:19:11.936732 kernel: acpiphp: Slot [16] registered Sep 4 17:19:11.936745 kernel: acpiphp: Slot [17] registered Sep 4 17:19:11.936755 kernel: acpiphp: Slot [18] registered Sep 4 17:19:11.936765 kernel: acpiphp: Slot [19] registered Sep 4 17:19:11.936776 kernel: acpiphp: Slot [20] registered Sep 4 17:19:11.936786 kernel: acpiphp: Slot [21] registered Sep 4 17:19:11.936796 kernel: acpiphp: Slot [22] registered Sep 4 17:19:11.936807 kernel: acpiphp: Slot [23] registered Sep 4 17:19:11.936841 kernel: acpiphp: Slot [24] registered Sep 4 17:19:11.936851 kernel: acpiphp: Slot [25] registered Sep 4 17:19:11.936862 kernel: acpiphp: Slot [26] registered Sep 4 17:19:11.936876 kernel: acpiphp: Slot [27] registered Sep 4 17:19:11.936896 kernel: acpiphp: Slot [28] registered Sep 4 17:19:11.936906 kernel: acpiphp: Slot [29] registered Sep 4 17:19:11.936916 kernel: acpiphp: Slot [30] registered Sep 4 17:19:11.936927 kernel: acpiphp: Slot [31] registered Sep 4 17:19:11.936937 kernel: PCI host bridge to bus 0000:00 Sep 4 17:19:11.937103 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 17:19:11.937252 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 17:19:11.937403 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 17:19:11.937547 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Sep 4 17:19:11.937686 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Sep 4 17:19:11.937842 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:19:11.938029 kernel: pci 
0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 4 17:19:11.938196 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 4 17:19:11.938374 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Sep 4 17:19:11.938536 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Sep 4 17:19:11.938698 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 4 17:19:11.938853 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 4 17:19:11.939008 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 4 17:19:11.939149 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 4 17:19:11.939296 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 4 17:19:11.939428 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 4 17:19:11.939559 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Sep 4 17:19:11.939705 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Sep 4 17:19:11.939893 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 4 17:19:11.940057 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Sep 4 17:19:11.940210 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 4 17:19:11.940365 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Sep 4 17:19:11.940530 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 17:19:11.940706 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:19:11.940900 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Sep 4 17:19:11.941059 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 4 17:19:11.941216 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 4 17:19:11.941380 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Sep 4 17:19:11.941544 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Sep 4 17:19:11.941697 
kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 4 17:19:11.941869 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 4 17:19:11.942046 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Sep 4 17:19:11.942221 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Sep 4 17:19:11.942381 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Sep 4 17:19:11.942519 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 4 17:19:11.942663 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 4 17:19:11.942679 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 17:19:11.942687 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 17:19:11.942694 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 17:19:11.942702 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 17:19:11.942710 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 4 17:19:11.942717 kernel: iommu: Default domain type: Translated Sep 4 17:19:11.942725 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 17:19:11.942733 kernel: efivars: Registered efivars operations Sep 4 17:19:11.942740 kernel: PCI: Using ACPI for IRQ routing Sep 4 17:19:11.942750 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 17:19:11.942758 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 4 17:19:11.942766 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 4 17:19:11.942773 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 4 17:19:11.942780 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 4 17:19:11.942992 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Sep 4 17:19:11.943131 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Sep 4 17:19:11.943269 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 17:19:11.943284 
kernel: vgaarb: loaded Sep 4 17:19:11.943292 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 4 17:19:11.943300 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 4 17:19:11.943307 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 17:19:11.943315 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:19:11.943323 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:19:11.943331 kernel: pnp: PnP ACPI init Sep 4 17:19:11.943463 kernel: pnp 00:02: [dma 2] Sep 4 17:19:11.943480 kernel: pnp: PnP ACPI: found 6 devices Sep 4 17:19:11.943496 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 17:19:11.943507 kernel: NET: Registered PF_INET protocol family Sep 4 17:19:11.943519 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:19:11.943530 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:19:11.943541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:19:11.943552 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:19:11.943563 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:19:11.943574 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:19:11.943589 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:19:11.943600 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:19:11.943611 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:19:11.943622 kernel: NET: Registered PF_XDP protocol family Sep 4 17:19:11.943786 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 4 17:19:11.943984 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 4 17:19:11.944128 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] 
Sep 4 17:19:11.944270 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 17:19:11.944416 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 17:19:11.944558 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Sep 4 17:19:11.944696 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Sep 4 17:19:11.944916 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Sep 4 17:19:11.945073 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 4 17:19:11.945101 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:19:11.945112 kernel: Initialise system trusted keyrings Sep 4 17:19:11.945123 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:19:11.945138 kernel: Key type asymmetric registered Sep 4 17:19:11.945149 kernel: Asymmetric key parser 'x509' registered Sep 4 17:19:11.945159 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 17:19:11.945170 kernel: io scheduler mq-deadline registered Sep 4 17:19:11.945181 kernel: io scheduler kyber registered Sep 4 17:19:11.945191 kernel: io scheduler bfq registered Sep 4 17:19:11.945202 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 17:19:11.945213 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 4 17:19:11.945224 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Sep 4 17:19:11.945235 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 4 17:19:11.945266 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:19:11.945286 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 17:19:11.945307 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 17:19:11.945349 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 17:19:11.945363 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 17:19:11.945533 kernel: rtc_cmos 00:05: RTC can wake from S4 Sep 4 17:19:11.945671 kernel: rtc_cmos 00:05: registered as rtc0 
Sep 4 17:19:11.945806 kernel: rtc_cmos 00:05: setting system clock to 2024-09-04T17:19:11 UTC (1725470351) Sep 4 17:19:11.945990 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 4 17:19:11.946007 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 4 17:19:11.946019 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Sep 4 17:19:11.946031 kernel: efifb: probing for efifb Sep 4 17:19:11.946042 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 4 17:19:11.946053 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 4 17:19:11.946064 kernel: efifb: scrolling: redraw Sep 4 17:19:11.946075 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 4 17:19:11.946091 kernel: Console: switching to colour frame buffer device 100x37 Sep 4 17:19:11.946103 kernel: fb0: EFI VGA frame buffer device Sep 4 17:19:11.946114 kernel: pstore: Using crash dump compression: deflate Sep 4 17:19:11.946125 kernel: pstore: Registered efi_pstore as persistent store backend Sep 4 17:19:11.946137 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:19:11.946148 kernel: Segment Routing with IPv6 Sep 4 17:19:11.946159 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:19:11.946171 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:19:11.946182 kernel: Key type dns_resolver registered Sep 4 17:19:11.946194 kernel: IPI shorthand broadcast: enabled Sep 4 17:19:11.946209 kernel: sched_clock: Marking stable (737005113, 126015568)->(916088425, -53067744) Sep 4 17:19:11.946225 kernel: registered taskstats version 1 Sep 4 17:19:11.946237 kernel: Loading compiled-in X.509 certificates Sep 4 17:19:11.946249 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: a53bb4e7e3319f75620f709d8a6c7aef0adb3b02' Sep 4 17:19:11.946260 kernel: Key type .fscrypt registered Sep 4 17:19:11.946274 kernel: Key type fscrypt-provisioning registered Sep 4 
17:19:11.946285 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:19:11.946297 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:19:11.946308 kernel: ima: No architecture policies found Sep 4 17:19:11.946320 kernel: clk: Disabling unused clocks Sep 4 17:19:11.946331 kernel: Freeing unused kernel image (initmem) memory: 49336K Sep 4 17:19:11.946343 kernel: Write protecting the kernel read-only data: 36864k Sep 4 17:19:11.946355 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Sep 4 17:19:11.946366 kernel: Run /init as init process Sep 4 17:19:11.946381 kernel: with arguments: Sep 4 17:19:11.946392 kernel: /init Sep 4 17:19:11.946401 kernel: with environment: Sep 4 17:19:11.946409 kernel: HOME=/ Sep 4 17:19:11.946418 kernel: TERM=linux Sep 4 17:19:11.946425 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:19:11.946436 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:19:11.946449 systemd[1]: Detected virtualization kvm. Sep 4 17:19:11.946458 systemd[1]: Detected architecture x86-64. Sep 4 17:19:11.946466 systemd[1]: Running in initrd. Sep 4 17:19:11.946475 systemd[1]: No hostname configured, using default hostname. Sep 4 17:19:11.946493 systemd[1]: Hostname set to . Sep 4 17:19:11.946506 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:19:11.946518 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:19:11.946530 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:19:11.946544 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 4 17:19:11.946553 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:19:11.946562 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:19:11.946571 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:19:11.946580 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:19:11.946590 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:19:11.946599 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:19:11.946609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:19:11.946618 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:19:11.946626 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:19:11.946634 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:19:11.946643 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:19:11.946651 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:19:11.946680 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:19:11.946717 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:19:11.946731 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:19:11.946768 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:19:11.946781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:19:11.946793 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:19:11.946805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:19:11.946831 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:19:11.946844 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:19:11.946856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:19:11.946868 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:19:11.946893 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:19:11.946905 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:19:11.946917 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:19:11.946929 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:19:11.946941 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:19:11.946953 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:19:11.946965 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:19:11.946982 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:19:11.947018 systemd-journald[192]: Collecting audit messages is disabled.
Sep 4 17:19:11.947049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:19:11.947062 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:19:11.947075 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:19:11.947086 systemd-journald[192]: Journal started
Sep 4 17:19:11.947111 systemd-journald[192]: Runtime Journal (/run/log/journal/c7a9e19e3b2644f1836141a47b28b07b) is 6.0M, max 48.3M, 42.3M free.
Sep 4 17:19:11.950826 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:19:11.950857 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:19:11.953067 systemd-modules-load[194]: Inserted module 'overlay'
Sep 4 17:19:11.958976 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:19:11.961169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:19:11.963251 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:19:11.966035 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:19:11.975672 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:19:11.980801 dracut-cmdline[221]: dracut-dracut-053
Sep 4 17:19:11.983590 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6662bd39fec77da4c9a5c59d2cba257325976309ed96904c83697df1825085bf
Sep 4 17:19:12.006845 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:19:12.009510 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 4 17:19:12.010570 kernel: Bridge firewalling registered
Sep 4 17:19:12.011947 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:19:12.016970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:19:12.027792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:19:12.036971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:19:12.067734 systemd-resolved[286]: Positive Trust Anchors:
Sep 4 17:19:12.067750 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:19:12.067780 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:19:12.070320 systemd-resolved[286]: Defaulting to hostname 'linux'.
Sep 4 17:19:12.071354 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:19:12.077387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:19:12.083838 kernel: SCSI subsystem initialized
Sep 4 17:19:12.095840 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:19:12.108834 kernel: iscsi: registered transport (tcp)
Sep 4 17:19:12.133855 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:19:12.133891 kernel: QLogic iSCSI HBA Driver
Sep 4 17:19:12.180029 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:19:12.188046 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:19:12.214842 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:19:12.214912 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:19:12.216430 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:19:12.263856 kernel: raid6: avx2x4 gen() 25486 MB/s
Sep 4 17:19:12.280846 kernel: raid6: avx2x2 gen() 29852 MB/s
Sep 4 17:19:12.297965 kernel: raid6: avx2x1 gen() 23929 MB/s
Sep 4 17:19:12.298006 kernel: raid6: using algorithm avx2x2 gen() 29852 MB/s
Sep 4 17:19:12.315957 kernel: raid6: .... xor() 18953 MB/s, rmw enabled
Sep 4 17:19:12.315995 kernel: raid6: using avx2x2 recovery algorithm
Sep 4 17:19:12.341854 kernel: xor: automatically using best checksumming function avx
Sep 4 17:19:12.521848 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:19:12.533328 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:19:12.545085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:19:12.557691 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Sep 4 17:19:12.562339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:19:12.574065 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:19:12.588447 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Sep 4 17:19:12.621902 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:19:12.630017 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:19:12.707110 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:19:12.715012 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:19:12.730629 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:19:12.732769 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:19:12.736255 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:19:12.738979 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:19:12.743887 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 4 17:19:12.746335 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:19:12.750681 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:19:12.752853 kernel: cryptd: max_cpu_qlen set to 1000
Sep 4 17:19:12.752886 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:19:12.754824 kernel: GPT:9289727 != 19775487
Sep 4 17:19:12.754844 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:19:12.754854 kernel: GPT:9289727 != 19775487
Sep 4 17:19:12.754906 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:19:12.756284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:19:12.778240 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 4 17:19:12.778278 kernel: AES CTR mode by8 optimization enabled
Sep 4 17:19:12.778312 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:19:12.780758 kernel: libata version 3.00 loaded.
Sep 4 17:19:12.783937 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 4 17:19:12.787059 kernel: scsi host0: ata_piix
Sep 4 17:19:12.787297 kernel: scsi host1: ata_piix
Sep 4 17:19:12.787590 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Sep 4 17:19:12.787614 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Sep 4 17:19:12.789720 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:19:12.790736 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:19:12.792904 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:19:12.795863 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:19:12.795929 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:19:12.797464 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:19:12.811089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:19:12.815200 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462)
Sep 4 17:19:12.815229 kernel: BTRFS: device fsid d110be6f-93a3-451a-b365-11b5d04e0602 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (460)
Sep 4 17:19:12.828458 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:19:12.830073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:19:12.837240 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:19:12.850976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:19:12.856110 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:19:12.857384 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:19:12.880018 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:19:12.882061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:19:12.888320 disk-uuid[543]: Primary Header is updated.
Sep 4 17:19:12.888320 disk-uuid[543]: Secondary Entries is updated.
Sep 4 17:19:12.888320 disk-uuid[543]: Secondary Header is updated.
Sep 4 17:19:12.892838 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:19:12.897836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:19:12.909298 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:19:12.943397 kernel: ata2: found unknown device (class 0)
Sep 4 17:19:12.945863 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 4 17:19:12.948948 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 4 17:19:13.003906 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 4 17:19:13.004160 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 17:19:13.016839 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Sep 4 17:19:13.907865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:19:13.908550 disk-uuid[544]: The operation has completed successfully.
Sep 4 17:19:13.937306 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:19:13.937430 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:19:13.968983 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:19:13.972605 sh[580]: Success
Sep 4 17:19:13.985859 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 4 17:19:14.017144 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:19:14.031378 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:19:14.036040 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:19:14.045125 kernel: BTRFS info (device dm-0): first mount of filesystem d110be6f-93a3-451a-b365-11b5d04e0602
Sep 4 17:19:14.045173 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:19:14.045185 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:19:14.046150 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:19:14.046899 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:19:14.051500 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:19:14.053133 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:19:14.064960 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:19:14.066706 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:19:14.075992 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:19:14.076046 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:19:14.076057 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:19:14.079890 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:19:14.088433 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:19:14.090959 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:19:14.175505 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:19:14.235027 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:19:14.255875 systemd-networkd[758]: lo: Link UP
Sep 4 17:19:14.255888 systemd-networkd[758]: lo: Gained carrier
Sep 4 17:19:14.440529 systemd-networkd[758]: Enumeration completed
Sep 4 17:19:14.440697 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:19:14.441296 systemd[1]: Reached target network.target - Network.
Sep 4 17:19:14.445271 systemd-networkd[758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:19:14.445280 systemd-networkd[758]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:19:14.449671 systemd-networkd[758]: eth0: Link UP
Sep 4 17:19:14.449680 systemd-networkd[758]: eth0: Gained carrier
Sep 4 17:19:14.449687 systemd-networkd[758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:19:14.459723 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:19:14.470889 systemd-networkd[758]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:19:14.471072 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:19:14.525244 ignition[762]: Ignition 2.18.0
Sep 4 17:19:14.525257 ignition[762]: Stage: fetch-offline
Sep 4 17:19:14.525307 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:19:14.525319 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:19:14.525549 ignition[762]: parsed url from cmdline: ""
Sep 4 17:19:14.525553 ignition[762]: no config URL provided
Sep 4 17:19:14.525559 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:19:14.525575 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:19:14.525604 ignition[762]: op(1): [started] loading QEMU firmware config module
Sep 4 17:19:14.525610 ignition[762]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:19:14.536498 ignition[762]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:19:14.576106 ignition[762]: parsing config with SHA512: bdd5bab9cc9578fc3dae241ad9541dd164defde5f89bd28a64db2e43372bbe81c079c84134c4a405b53101a096a92847fe02a58e043d1a330fdd23df102682d9
Sep 4 17:19:14.579951 unknown[762]: fetched base config from "system"
Sep 4 17:19:14.579966 unknown[762]: fetched user config from "qemu"
Sep 4 17:19:14.580898 ignition[762]: fetch-offline: fetch-offline passed
Sep 4 17:19:14.580995 ignition[762]: Ignition finished successfully
Sep 4 17:19:14.583323 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:19:14.584953 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:19:14.589027 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:19:14.604651 ignition[775]: Ignition 2.18.0
Sep 4 17:19:14.604664 ignition[775]: Stage: kargs
Sep 4 17:19:14.604914 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:19:14.604928 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:19:14.606078 ignition[775]: kargs: kargs passed
Sep 4 17:19:14.606137 ignition[775]: Ignition finished successfully
Sep 4 17:19:14.609617 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:19:14.615963 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:19:14.630100 ignition[784]: Ignition 2.18.0
Sep 4 17:19:14.630111 ignition[784]: Stage: disks
Sep 4 17:19:14.630271 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:19:14.630283 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:19:14.631096 ignition[784]: disks: disks passed
Sep 4 17:19:14.633787 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:19:14.631144 ignition[784]: Ignition finished successfully
Sep 4 17:19:14.635101 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:19:14.636685 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:19:14.638923 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:19:14.640004 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:19:14.641068 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:19:14.656004 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:19:14.668764 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:19:14.675861 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:19:14.688985 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:19:14.795860 kernel: EXT4-fs (vda9): mounted filesystem 84a5cefa-c3c7-47d7-9305-7e6877f73628 r/w with ordered data mode. Quota mode: none.
Sep 4 17:19:14.796743 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:19:14.798072 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:19:14.811046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:19:14.813248 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:19:14.814754 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:19:14.820973 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802)
Sep 4 17:19:14.814829 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:19:14.827164 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:19:14.827187 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:19:14.827199 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:19:14.814864 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:19:14.823354 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:19:14.828109 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:19:14.833062 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:19:14.835319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:19:14.870336 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:19:14.874259 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:19:14.877871 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:19:14.881412 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:19:14.957292 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:19:14.971053 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:19:14.972977 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:19:14.980943 kernel: BTRFS info (device vda6): last unmount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:19:15.003416 ignition[917]: INFO : Ignition 2.18.0
Sep 4 17:19:15.003416 ignition[917]: INFO : Stage: mount
Sep 4 17:19:15.006394 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:19:15.006394 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:19:15.004084 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:19:15.012056 ignition[917]: INFO : mount: mount passed
Sep 4 17:19:15.012978 ignition[917]: INFO : Ignition finished successfully
Sep 4 17:19:15.016444 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:19:15.023196 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:19:15.044270 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:19:15.058962 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:19:15.066828 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Sep 4 17:19:15.066881 kernel: BTRFS info (device vda6): first mount of filesystem 50e7422b-f0c7-4536-902a-3ab4c864240b
Sep 4 17:19:15.066897 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 4 17:19:15.068332 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:19:15.070853 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:19:15.072574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:19:15.104662 ignition[947]: INFO : Ignition 2.18.0
Sep 4 17:19:15.104662 ignition[947]: INFO : Stage: files
Sep 4 17:19:15.106615 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:19:15.106615 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:19:15.106615 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:19:15.110661 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:19:15.110661 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:19:15.116057 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:19:15.117791 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:19:15.119654 unknown[947]: wrote ssh authorized keys file for user: core
Sep 4 17:19:15.121012 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:19:15.123662 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:19:15.125892 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 4 17:19:15.222899 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:19:15.359493 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 4 17:19:15.359493 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:19:15.363610 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:19:15.365575 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:19:15.367951 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:19:15.369831 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:19:15.371856 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:19:15.373822 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:19:15.375856 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:19:15.378063 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:19:15.379967 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:19:15.382001 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:19:15.384503 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:19:15.386914 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:19:15.389070 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Sep 4 17:19:15.732100 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:19:16.113972 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Sep 4 17:19:16.113972 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:19:16.117872 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:19:16.117872 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:19:16.117872 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:19:16.117872 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 17:19:16.117872 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:19:16.117872 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:19:16.117872 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 17:19:16.117872 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:19:16.140098 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:19:16.144752 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:19:16.146582 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:19:16.146582 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:19:16.146582 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:19:16.146582 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:19:16.146582 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:19:16.146582 ignition[947]: INFO : files: files passed
Sep 4 17:19:16.146582 ignition[947]: INFO : Ignition finished successfully
Sep 4 17:19:16.147983 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:19:16.162147 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:19:16.166858 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:19:16.170043 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:19:16.171317 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:19:16.178041 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 17:19:16.182803 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:19:16.182803 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:19:16.186105 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:19:16.188986 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:19:16.191850 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:19:16.205038 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:19:16.232042 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:19:16.233153 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:19:16.236209 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:19:16.238310 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:19:16.240499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:19:16.253104 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:19:16.266904 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:19:16.276014 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:19:16.290286 systemd[1]: Stopped target network.target - Network.
Sep 4 17:19:16.290694 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:19:16.291267 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:19:16.291615 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:19:16.292131 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:19:16.292254 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:19:16.298849 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:19:16.299311 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:19:16.299637 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:19:16.300154 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:19:16.300487 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:19:16.300847 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:19:16.301344 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:19:16.301703 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:19:16.302210 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:19:16.302536 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:19:16.303037 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:19:16.303148 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:19:16.320862 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:19:16.321344 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:19:16.321644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:19:16.321892 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:19:16.327427 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:19:16.327547 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:19:16.327741 systemd-networkd[758]: eth0: Gained IPv6LL
Sep 4 17:19:16.334863 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:19:16.335987 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:19:16.338385 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:19:16.340179 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:19:16.341272 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:19:16.344073 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:19:16.345927 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:19:16.347824 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:19:16.348701 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:19:16.350731 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:19:16.351667 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:19:16.353895 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:19:16.355136 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:19:16.357669 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:19:16.358674 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:19:16.374996 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:19:16.377752 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:19:16.379830 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:19:16.382084 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:19:16.384514 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:19:16.385663 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:19:16.388163 systemd-networkd[758]: eth0: DHCPv6 lease lost
Sep 4 17:19:16.390581 ignition[1003]: INFO : Ignition 2.18.0
Sep 4 17:19:16.390581 ignition[1003]: INFO : Stage: umount
Sep 4 17:19:16.390581 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:19:16.390581 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:19:16.390581 ignition[1003]: INFO : umount: umount passed
Sep 4 17:19:16.390581 ignition[1003]: INFO : Ignition finished successfully
Sep 4 17:19:16.388336 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:19:16.388487 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:19:16.395651 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:19:16.395784 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:19:16.398366 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:19:16.398511 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:19:16.399773 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:19:16.399912 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:19:16.405003 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:19:16.405054 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:19:16.405366 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:19:16.405412 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:19:16.405719 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:19:16.405769 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:19:16.406231 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:19:16.406272 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:19:16.406727 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:19:16.406779 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:19:16.424024 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:19:16.425056 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:19:16.425129 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:19:16.427349 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:19:16.427403 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:19:16.429690 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:19:16.429740 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:19:16.432143 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:19:16.432193 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:19:16.434525 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:19:16.438730 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:19:16.439382 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:19:16.439493 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:19:16.453531 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:19:16.478208 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:19:16.481155 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:19:16.482182 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:19:16.484769 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:19:16.485883 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:19:16.488060 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:19:16.488105 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:19:16.491108 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:19:16.491167 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:19:16.494390 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:19:16.494444 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:19:16.497471 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:19:16.497528 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:19:16.509981 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:19:16.512283 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:19:16.513373 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:19:16.515899 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 4 17:19:16.515958 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:19:16.519925 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:19:16.519979 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:19:16.523341 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:19:16.524484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:19:16.527131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:19:16.528292 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:19:16.916878 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:19:16.918120 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:19:16.920471 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:19:16.922835 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:19:16.922910 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:19:16.932010 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:19:16.942694 systemd[1]: Switching root.
Sep 4 17:19:16.977559 systemd-journald[192]: Journal stopped
Sep 4 17:19:18.616914 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:19:18.616976 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:19:18.616993 kernel: SELinux: policy capability open_perms=1
Sep 4 17:19:18.617007 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:19:18.617021 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:19:18.617034 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:19:18.617054 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:19:18.617068 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:19:18.617082 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:19:18.617096 kernel: audit: type=1403 audit(1725470357.818:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:19:18.617111 systemd[1]: Successfully loaded SELinux policy in 47.698ms.
Sep 4 17:19:18.617130 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.280ms.
Sep 4 17:19:18.617143 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:19:18.617155 systemd[1]: Detected virtualization kvm.
Sep 4 17:19:18.617167 systemd[1]: Detected architecture x86-64.
Sep 4 17:19:18.617181 systemd[1]: Detected first boot.
Sep 4 17:19:18.617193 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:19:18.617205 zram_generator::config[1047]: No configuration found.
Sep 4 17:19:18.617218 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:19:18.617230 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:19:18.617246 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:19:18.617258 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:19:18.617271 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:19:18.617285 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:19:18.617297 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:19:18.617309 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:19:18.617321 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:19:18.617338 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:19:18.617350 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:19:18.617362 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:19:18.617378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:19:18.617395 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:19:18.617409 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:19:18.617421 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:19:18.617434 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:19:18.617446 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:19:18.617458 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:19:18.617470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:19:18.617482 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:19:18.617494 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:19:18.617506 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:19:18.617520 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:19:18.617533 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:19:18.617544 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:19:18.617556 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:19:18.617568 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:19:18.617580 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:19:18.617592 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:19:18.617606 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:19:18.617621 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:19:18.617633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:19:18.617645 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:19:18.617657 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:19:18.617669 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:19:18.617681 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:19:18.617693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:19:18.617718 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:19:18.617730 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:19:18.617744 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:19:18.617757 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:19:18.617769 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:19:18.617781 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:19:18.617794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:19:18.617806 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:19:18.617839 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:19:18.617851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:19:18.617867 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:19:18.617878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:19:18.617890 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:19:18.617903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:19:18.617916 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:19:18.617927 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:19:18.617940 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:19:18.617952 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:19:18.617966 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:19:18.617978 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:19:18.617990 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:19:18.618001 kernel: loop: module loaded
Sep 4 17:19:18.618013 kernel: fuse: init (API version 7.39)
Sep 4 17:19:18.618024 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:19:18.618054 systemd-journald[1109]: Collecting audit messages is disabled.
Sep 4 17:19:18.618076 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:19:18.618091 systemd-journald[1109]: Journal started
Sep 4 17:19:18.618118 systemd-journald[1109]: Runtime Journal (/run/log/journal/c7a9e19e3b2644f1836141a47b28b07b) is 6.0M, max 48.3M, 42.3M free.
Sep 4 17:19:18.390461 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:19:18.411755 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 17:19:18.412253 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:19:18.629092 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:19:18.631377 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:19:18.631427 systemd[1]: Stopped verity-setup.service.
Sep 4 17:19:18.635573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:19:18.636563 kernel: ACPI: bus type drm_connector registered
Sep 4 17:19:18.636625 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:19:18.638858 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:19:18.640028 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:19:18.641350 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:19:18.642426 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:19:18.643766 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:19:18.644964 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:19:18.646153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:19:18.647707 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:19:18.647891 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:19:18.649383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:19:18.649546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:19:18.651016 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:19:18.651182 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:19:18.652571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:19:18.652742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:19:18.654315 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:19:18.654483 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:19:18.655884 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:19:18.656047 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:19:18.657405 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:19:18.658777 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:19:18.660518 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:19:18.676199 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:19:18.689917 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:19:18.692529 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:19:18.693960 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:19:18.693987 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:19:18.703858 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:19:18.706757 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:19:18.711984 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:19:18.713592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:19:18.719534 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:19:18.725858 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:19:18.727410 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:19:18.729049 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:19:18.730868 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:19:18.734995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:19:18.742282 systemd-journald[1109]: Time spent on flushing to /var/log/journal/c7a9e19e3b2644f1836141a47b28b07b is 16.628ms for 985 entries.
Sep 4 17:19:18.742282 systemd-journald[1109]: System Journal (/var/log/journal/c7a9e19e3b2644f1836141a47b28b07b) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:19:18.849343 systemd-journald[1109]: Received client request to flush runtime journal.
Sep 4 17:19:18.849375 kernel: loop0: detected capacity change from 0 to 209816
Sep 4 17:19:18.849398 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:19:18.849480 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:19:18.849494 kernel: loop1: detected capacity change from 0 to 80568
Sep 4 17:19:18.746015 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:19:18.748728 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:19:18.751800 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:19:18.753341 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:19:18.754856 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:19:18.756365 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:19:18.758330 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:19:18.781069 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:19:18.789216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:19:18.798213 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 17:19:18.812973 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Sep 4 17:19:18.812990 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Sep 4 17:19:18.820010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:19:18.827108 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:19:18.832168 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:19:18.833507 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:19:18.836473 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:19:18.851752 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:19:18.899116 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:19:18.905974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:19:18.916837 kernel: loop2: detected capacity change from 0 to 139904
Sep 4 17:19:18.926041 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Sep 4 17:19:18.926063 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Sep 4 17:19:18.931927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:19:18.964443 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:19:18.967170 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:19:18.974845 kernel: loop3: detected capacity change from 0 to 209816
Sep 4 17:19:18.983845 kernel: loop4: detected capacity change from 0 to 80568
Sep 4 17:19:18.989832 kernel: loop5: detected capacity change from 0 to 139904
Sep 4 17:19:18.998178 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 17:19:18.998930 (sd-merge)[1188]: Merged extensions into '/usr'.
Sep 4 17:19:19.003905 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:19:19.003919 systemd[1]: Reloading...
Sep 4 17:19:19.065008 zram_generator::config[1212]: No configuration found.
Sep 4 17:19:19.154610 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:19:19.196416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:19:19.245133 systemd[1]: Reloading finished in 240 ms.
Sep 4 17:19:19.278343 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:19:19.279902 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:19:19.295089 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:19:19.297770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:19:19.303921 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:19:19.303938 systemd[1]: Reloading...
Sep 4 17:19:19.322318 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:19:19.322802 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:19:19.323835 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:19:19.324183 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Sep 4 17:19:19.324283 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Sep 4 17:19:19.328460 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:19:19.328474 systemd-tmpfiles[1250]: Skipping /boot
Sep 4 17:19:19.343822 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:19:19.343838 systemd-tmpfiles[1250]: Skipping /boot
Sep 4 17:19:19.357890 zram_generator::config[1273]: No configuration found.
Sep 4 17:19:19.469771 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:19:19.518383 systemd[1]: Reloading finished in 214 ms.
Sep 4 17:19:19.536049 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:19:19.549363 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:19:19.558014 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:19:19.561082 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:19:19.563908 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:19:19.568995 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:19:19.572720 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:19:19.577458 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:19:19.580832 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:19:19.580997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:19:19.584565 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:19:19.592113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:19:19.594957 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:19:19.596997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:19:19.607195 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:19:19.608432 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:19:19.610006 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:19:19.612163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:19:19.612397 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:19:19.614536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:19:19.615130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:19:19.617187 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:19:19.617852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:19:19.622571 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
Sep 4 17:19:19.627582 augenrules[1341]: No rules
Sep 4 17:19:19.628324 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:19:19.633586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:19:19.634039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:19:19.640106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:19:19.644084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:19:19.651790 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:19:19.653205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:19:19.655902 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:19:19.657380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:19:19.658562 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:19:19.664121 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:19:19.669430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:19:19.670260 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:19:19.672535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:19:19.672766 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:19:19.675036 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:19:19.675269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:19:19.685466 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:19:19.693842 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:19:19.694862 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1370)
Sep 4 17:19:19.704610 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:19:19.706857 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:19:19.719722 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:19:19.719937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:19:19.722050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:19:19.727492 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:19:19.736850 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1366)
Sep 4 17:19:19.742149 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:19:19.750046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:19:19.751498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:19:19.755047 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:19:19.767014 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 17:19:19.768630 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:19:19.768665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 17:19:19.769403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:19:19.769629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:19:19.772375 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:19:19.772592 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:19:19.774746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:19:19.774973 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:19:19.776978 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:19:19.777186 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:19:19.778925 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:19:19.799865 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 4 17:19:19.808496 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Sep 4 17:19:19.819827 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 4 17:19:19.819860 kernel: ACPI: button: Power Button [PWRF]
Sep 4 17:19:19.822694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:19:19.828412 systemd-resolved[1318]: Positive Trust Anchors:
Sep 4 17:19:19.828437 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:19:19.828478 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:19:19.833249 systemd-resolved[1318]: Defaulting to hostname 'linux'.
Sep 4 17:19:19.835013 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:19:19.836560 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:19:19.836640 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:19:19.836877 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:19:19.838285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:19:19.889146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:19:19.891343 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:19:19.896306 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 17:19:19.897137 systemd-networkd[1396]: lo: Link UP
Sep 4 17:19:19.897145 systemd-networkd[1396]: lo: Gained carrier
Sep 4 17:19:19.900594 systemd-networkd[1396]: Enumeration completed
Sep 4 17:19:19.901767 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:19:19.902214 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:19:19.902275 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:19:19.903170 systemd-networkd[1396]: eth0: Link UP
Sep 4 17:19:19.903235 systemd-networkd[1396]: eth0: Gained carrier
Sep 4 17:19:19.903282 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:19:19.906998 systemd[1]: Reached target network.target - Network.
Sep 4 17:19:19.908856 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:19:19.921123 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:19:19.925183 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:19:19.925338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:19:19.925611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:19:19.927929 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection.
Sep 4 17:19:19.928617 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 17:19:19.928679 systemd-timesyncd[1398]: Initial clock synchronization to Wed 2024-09-04 17:19:20.049707 UTC.
Sep 4 17:19:19.929154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:19:19.978829 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 17:19:19.990292 kernel: kvm_amd: TSC scaling supported
Sep 4 17:19:19.990332 kernel: kvm_amd: Nested Virtualization enabled
Sep 4 17:19:19.990348 kernel: kvm_amd: Nested Paging enabled
Sep 4 17:19:19.990864 kernel: kvm_amd: LBR virtualization supported
Sep 4 17:19:19.992219 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 4 17:19:19.992253 kernel: kvm_amd: Virtual GIF supported
Sep 4 17:19:20.015853 kernel: EDAC MC: Ver: 3.0.0
Sep 4 17:19:20.027309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:19:20.042060 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:19:20.057980 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:19:20.065808 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:19:20.096020 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:19:20.097634 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:19:20.098866 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:19:20.100109 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:19:20.101477 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:19:20.103043 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:19:20.104458 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:19:20.105774 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:19:20.107116 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:19:20.107150 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:19:20.108159 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:19:20.109869 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:19:20.112531 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:19:20.120051 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:19:20.122664 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:19:20.124353 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:19:20.125589 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:19:20.126625 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:19:20.127660 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:19:20.127695 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:19:20.128711 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:19:20.130913 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:19:20.133993 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:19:20.134785 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:19:20.138181 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:19:20.140183 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:19:20.143242 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:19:20.146941 jq[1430]: false
Sep 4 17:19:20.149068 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:19:20.152992 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:19:20.158122 extend-filesystems[1431]: Found loop3
Sep 4 17:19:20.158122 extend-filesystems[1431]: Found loop4
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found loop5
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found sr0
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found vda
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found vda1
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found vda2
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found vda3
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found usr
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found vda4
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found vda6
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found vda7
Sep 4 17:19:20.163428 extend-filesystems[1431]: Found vda9
Sep 4 17:19:20.163428 extend-filesystems[1431]: Checking size of /dev/vda9
Sep 4 17:19:20.158138 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:19:20.189070 extend-filesystems[1431]: Resized partition /dev/vda9
Sep 4 17:19:20.191814 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 17:19:20.168915 dbus-daemon[1429]: [system] SELinux support is enabled
Sep 4 17:19:20.165021 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:19:20.207859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1370)
Sep 4 17:19:20.207958 extend-filesystems[1449]: resize2fs 1.47.0 (5-Feb-2023)
Sep 4 17:19:20.166940 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:19:20.170248 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:19:20.171221 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:19:20.173316 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:19:20.211224 jq[1446]: true
Sep 4 17:19:20.177178 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:19:20.189246 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:19:20.200423 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:19:20.201984 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:19:20.202427 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:19:20.202682 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:19:20.205412 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:19:20.205647 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:19:20.220399 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:19:20.221856 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 17:19:20.228217 update_engine[1445]: I0904 17:19:20.227617 1445 main.cc:92] Flatcar Update Engine starting
Sep 4 17:19:20.239463 update_engine[1445]: I0904 17:19:20.239408 1445 update_check_scheduler.cc:74] Next update check in 2m47s
Sep 4 17:19:20.249056 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 17:19:20.249056 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 17:19:20.249056 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 17:19:20.258085 extend-filesystems[1431]: Resized filesystem in /dev/vda9
Sep 4 17:19:20.251703 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:19:20.251935 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:19:20.256859 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:19:20.259013 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 4 17:19:20.259032 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 17:19:20.260209 jq[1456]: true
Sep 4 17:19:20.261512 systemd-logind[1440]: New seat seat0.
Sep 4 17:19:20.269912 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:19:20.276010 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:19:20.276159 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:19:20.277216 tar[1454]: linux-amd64/helm
Sep 4 17:19:20.278999 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:19:20.279112 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:19:20.291088 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:19:20.325708 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:19:20.326912 bash[1485]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:19:20.328211 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:19:20.331180 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 17:19:20.430070 containerd[1457]: time="2024-09-04T17:19:20.429984759Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 17:19:20.432004 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:19:20.455160 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:19:20.459659 containerd[1457]: time="2024-09-04T17:19:20.459623501Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:19:20.459769 containerd[1457]: time="2024-09-04T17:19:20.459666556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:19:20.461082 containerd[1457]: time="2024-09-04T17:19:20.461054857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:19:20.462625 containerd[1457]: time="2024-09-04T17:19:20.461136261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:19:20.462625 containerd[1457]: time="2024-09-04T17:19:20.461379018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:19:20.462625 containerd[1457]: time="2024-09-04T17:19:20.461394059Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:19:20.462625 containerd[1457]: time="2024-09-04T17:19:20.461485255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:19:20.462625 containerd[1457]: time="2024-09-04T17:19:20.461546045Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:19:20.462625 containerd[1457]: time="2024-09-04T17:19:20.461557603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:19:20.462625 containerd[1457]: time="2024-09-04T17:19:20.461640167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:19:20.462060 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:19:20.462816 containerd[1457]: time="2024-09-04T17:19:20.462769914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:19:20.462816 containerd[1457]: time="2024-09-04T17:19:20.462789962Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 17:19:20.462816 containerd[1457]: time="2024-09-04T17:19:20.462800441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:19:20.463203 containerd[1457]: time="2024-09-04T17:19:20.462940625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:19:20.463203 containerd[1457]: time="2024-09-04T17:19:20.462956151Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:19:20.463203 containerd[1457]: time="2024-09-04T17:19:20.463020101Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 17:19:20.463203 containerd[1457]: time="2024-09-04T17:19:20.463032053Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:19:20.469597 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:19:20.469695 containerd[1457]: time="2024-09-04T17:19:20.469670429Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:19:20.469722 containerd[1457]: time="2024-09-04T17:19:20.469697492Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:19:20.469722 containerd[1457]: time="2024-09-04T17:19:20.469710928Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:19:20.469766 containerd[1457]: time="2024-09-04T17:19:20.469740678Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:19:20.469766 containerd[1457]: time="2024-09-04T17:19:20.469754488Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:19:20.469805 containerd[1457]: time="2024-09-04T17:19:20.469765107Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:19:20.469805 containerd[1457]: time="2024-09-04T17:19:20.469775908Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:19:20.469834 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:19:20.469949 containerd[1457]: time="2024-09-04T17:19:20.469899760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:19:20.469949 containerd[1457]: time="2024-09-04T17:19:20.469918324Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:19:20.469949 containerd[1457]: time="2024-09-04T17:19:20.469930013Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:19:20.469949 containerd[1457]: time="2024-09-04T17:19:20.469943117Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:19:20.470029 containerd[1457]: time="2024-09-04T17:19:20.469955867Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:19:20.470029 containerd[1457]: time="2024-09-04T17:19:20.469971594Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:19:20.470029 containerd[1457]: time="2024-09-04T17:19:20.469985676Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:19:20.470029 containerd[1457]: time="2024-09-04T17:19:20.469997567Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:19:20.470029 containerd[1457]: time="2024-09-04T17:19:20.470011094Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:19:20.470029 containerd[1457]: time="2024-09-04T17:19:20.470023379Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:19:20.470135 containerd[1457]: time="2024-09-04T17:19:20.470034735Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:19:20.470135 containerd[1457]: time="2024-09-04T17:19:20.470045204Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:19:20.470169 containerd[1457]: time="2024-09-04T17:19:20.470155922Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:19:20.470387 containerd[1457]: time="2024-09-04T17:19:20.470371596Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:19:20.470421 containerd[1457]: time="2024-09-04T17:19:20.470397145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470421 containerd[1457]: time="2024-09-04T17:19:20.470409834Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:19:20.470472 containerd[1457]: time="2024-09-04T17:19:20.470429105Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:19:20.470507 containerd[1457]: time="2024-09-04T17:19:20.470492520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470529 containerd[1457]: time="2024-09-04T17:19:20.470508500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470529 containerd[1457]: time="2024-09-04T17:19:20.470519685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470570 containerd[1457]: time="2024-09-04T17:19:20.470530607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470570 containerd[1457]: time="2024-09-04T17:19:20.470542892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470570 containerd[1457]: time="2024-09-04T17:19:20.470554188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470570 containerd[1457]: time="2024-09-04T17:19:20.470565141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470635 containerd[1457]: time="2024-09-04T17:19:20.470575417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470635 containerd[1457]: time="2024-09-04T17:19:20.470598686Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:19:20.470760 containerd[1457]: time="2024-09-04T17:19:20.470740193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470788 containerd[1457]: time="2024-09-04T17:19:20.470757919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470788 containerd[1457]: time="2024-09-04T17:19:20.470769458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470788 containerd[1457]: time="2024-09-04T17:19:20.470780491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470859 containerd[1457]: time="2024-09-04T17:19:20.470792160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470859 containerd[1457]: time="2024-09-04T17:19:20.470804940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470859 containerd[1457]: time="2024-09-04T17:19:20.470830167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.470859 containerd[1457]: time="2024-09-04T17:19:20.470840574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:19:20.471332 containerd[1457]: time="2024-09-04T17:19:20.471207698Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:19:20.471332 containerd[1457]: time="2024-09-04T17:19:20.471295945Z" level=info msg="Connect containerd service"
Sep 4 17:19:20.471332 containerd[1457]: time="2024-09-04T17:19:20.471325100Z" level=info msg="using legacy CRI server"
Sep 4 17:19:20.471332 containerd[1457]: time="2024-09-04T17:19:20.471335274Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:19:20.471567 containerd[1457]: time="2024-09-04T17:19:20.471519119Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:19:20.472362 containerd[1457]: time="2024-09-04T17:19:20.472340049Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:19:20.472395 containerd[1457]: time="2024-09-04T17:19:20.472381922Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:19:20.472447 containerd[1457]: time="2024-09-04T17:19:20.472432698Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:19:20.472527 containerd[1457]: time="2024-09-04T17:19:20.472449354Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:19:20.472527 containerd[1457]: time="2024-09-04T17:19:20.472464658Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:19:20.473398 containerd[1457]: time="2024-09-04T17:19:20.473163422Z" level=info msg="Start subscribing containerd event"
Sep 4 17:19:20.473398 containerd[1457]: time="2024-09-04T17:19:20.473230097Z" level=info msg="Start recovering state"
Sep 4 17:19:20.473398 containerd[1457]: time="2024-09-04T17:19:20.473296106Z" level=info msg="Start event monitor"
Sep 4 17:19:20.473398 containerd[1457]: time="2024-09-04T17:19:20.473312551Z" level=info msg="Start snapshots syncer"
Sep 4 17:19:20.473398 containerd[1457]: time="2024-09-04T17:19:20.473325311Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:19:20.473398 containerd[1457]: time="2024-09-04T17:19:20.473334153Z" level=info msg="Start streaming server"
Sep 4 17:19:20.473824 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:19:20.474779 containerd[1457]: time="2024-09-04T17:19:20.474149814Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:19:20.474779 containerd[1457]: time="2024-09-04T17:19:20.474226009Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:19:20.475808 containerd[1457]: time="2024-09-04T17:19:20.475666480Z" level=info msg="containerd successfully booted in 0.047075s"
Sep 4 17:19:20.478748 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:19:20.488443 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:19:20.503108 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:19:20.505333 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:19:20.506643 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:19:20.652884 tar[1454]: linux-amd64/LICENSE
Sep 4 17:19:20.652884 tar[1454]: linux-amd64/README.md
Sep 4 17:19:20.665963 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:19:21.551420 systemd-networkd[1396]: eth0: Gained IPv6LL
Sep 4 17:19:21.555267 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:19:21.557516 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:19:21.566065 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 17:19:21.568621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:19:21.571256 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:19:21.590071 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 17:19:21.590370 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 17:19:21.592275 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:19:21.597561 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:19:22.205269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:19:22.207114 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:19:22.208546 systemd[1]: Startup finished in 879ms (kernel) + 6.085s (initrd) + 4.436s (userspace) = 11.401s.
Sep 4 17:19:22.210913 (kubelet)[1542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:19:22.707926 kubelet[1542]: E0904 17:19:22.707785    1542 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:19:22.713006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:19:22.713237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:19:22.713653 systemd[1]: kubelet.service: Consumed 1.011s CPU time.
Sep 4 17:19:26.342252 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:19:26.343617 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:33240.service - OpenSSH per-connection server daemon (10.0.0.1:33240).
Sep 4 17:19:26.383157 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 33240 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:19:26.385051 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:19:26.394046 systemd-logind[1440]: New session 1 of user core.
Sep 4 17:19:26.395622 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:19:26.406109 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:19:26.418333 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:19:26.427339 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:19:26.430735 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:19:26.564445 systemd[1560]: Queued start job for default target default.target.
Sep 4 17:19:26.574207 systemd[1560]: Created slice app.slice - User Application Slice.
Sep 4 17:19:26.574234 systemd[1560]: Reached target paths.target - Paths.
Sep 4 17:19:26.574249 systemd[1560]: Reached target timers.target - Timers.
Sep 4 17:19:26.576041 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:19:26.591884 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:19:26.592019 systemd[1560]: Reached target sockets.target - Sockets.
Sep 4 17:19:26.592038 systemd[1560]: Reached target basic.target - Basic System.
Sep 4 17:19:26.592077 systemd[1560]: Reached target default.target - Main User Target.
Sep 4 17:19:26.592111 systemd[1560]: Startup finished in 153ms.
Sep 4 17:19:26.592629 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:19:26.594258 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:19:26.656688 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:33246.service - OpenSSH per-connection server daemon (10.0.0.1:33246).
Sep 4 17:19:26.695348 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 33246 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:19:26.697518 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:19:26.701646 systemd-logind[1440]: New session 2 of user core.
Sep 4 17:19:26.707999 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:19:26.763025 sshd[1571]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:26.778388 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:33246.service: Deactivated successfully.
Sep 4 17:19:26.780462 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:19:26.781842 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:19:26.789228 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:33256.service - OpenSSH per-connection server daemon (10.0.0.1:33256).
Sep 4 17:19:26.790337 systemd-logind[1440]: Removed session 2.
Sep 4 17:19:26.816749 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33256 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:19:26.818367 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:19:26.822393 systemd-logind[1440]: New session 3 of user core.
Sep 4 17:19:26.832090 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:19:26.884313 sshd[1578]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:26.893627 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:33256.service: Deactivated successfully.
Sep 4 17:19:26.895164 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:19:26.896595 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:19:26.904226 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:33262.service - OpenSSH per-connection server daemon (10.0.0.1:33262).
Sep 4 17:19:26.905300 systemd-logind[1440]: Removed session 3.
Sep 4 17:19:26.933165 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 33262 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:19:26.934789 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:19:26.939020 systemd-logind[1440]: New session 4 of user core.
Sep 4 17:19:26.952970 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:19:27.008235 sshd[1585]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:27.017649 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:33262.service: Deactivated successfully.
Sep 4 17:19:27.019277 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:19:27.020779 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:19:27.022044 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:33274.service - OpenSSH per-connection server daemon (10.0.0.1:33274).
Sep 4 17:19:27.022894 systemd-logind[1440]: Removed session 4.
Sep 4 17:19:27.056031 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 33274 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:19:27.057702 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:19:27.061756 systemd-logind[1440]: New session 5 of user core.
Sep 4 17:19:27.077075 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:19:27.138866 sudo[1596]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:19:27.139180 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:19:27.161728 sudo[1596]: pam_unix(sudo:session): session closed for user root
Sep 4 17:19:27.165205 sshd[1593]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:27.173306 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:33274.service: Deactivated successfully.
Sep 4 17:19:27.175653 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:19:27.177394 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:19:27.194433 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:33284.service - OpenSSH per-connection server daemon (10.0.0.1:33284).
Sep 4 17:19:27.195425 systemd-logind[1440]: Removed session 5.
Sep 4 17:19:27.224627 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 33284 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:19:27.226321 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:19:27.230195 systemd-logind[1440]: New session 6 of user core.
Sep 4 17:19:27.240033 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:19:27.296382 sudo[1606]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:19:27.296759 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:19:27.300578 sudo[1606]: pam_unix(sudo:session): session closed for user root
Sep 4 17:19:27.306404 sudo[1605]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:19:27.306750 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:19:27.327077 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:19:27.328753 auditctl[1609]: No rules
Sep 4 17:19:27.330040 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:19:27.330317 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:19:27.332111 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:19:27.361104 augenrules[1627]: No rules
Sep 4 17:19:27.362852 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:19:27.364147 sudo[1605]: pam_unix(sudo:session): session closed for user root
Sep 4 17:19:27.365907 sshd[1601]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:27.377521 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:33284.service: Deactivated successfully.
Sep 4 17:19:27.379045 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:19:27.380575 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:19:27.392343 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:33290.service - OpenSSH per-connection server daemon (10.0.0.1:33290).
Sep 4 17:19:27.393548 systemd-logind[1440]: Removed session 6.
Sep 4 17:19:27.421687 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 33290 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:19:27.423060 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:19:27.426770 systemd-logind[1440]: New session 7 of user core.
Sep 4 17:19:27.435930 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:19:27.490129 sudo[1639]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:19:27.490428 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:19:27.604207 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:19:27.604221 (dockerd)[1649]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:19:27.854169 dockerd[1649]: time="2024-09-04T17:19:27.854022938Z" level=info msg="Starting up"
Sep 4 17:19:29.091440 dockerd[1649]: time="2024-09-04T17:19:29.091346967Z" level=info msg="Loading containers: start."
Sep 4 17:19:29.248858 kernel: Initializing XFRM netlink socket
Sep 4 17:19:29.348203 systemd-networkd[1396]: docker0: Link UP
Sep 4 17:19:29.376044 dockerd[1649]: time="2024-09-04T17:19:29.375987940Z" level=info msg="Loading containers: done."
Sep 4 17:19:29.431324 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1256225044-merged.mount: Deactivated successfully.
Sep 4 17:19:29.434114 dockerd[1649]: time="2024-09-04T17:19:29.434059011Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:19:29.434336 dockerd[1649]: time="2024-09-04T17:19:29.434304154Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:19:29.434494 dockerd[1649]: time="2024-09-04T17:19:29.434467850Z" level=info msg="Daemon has completed initialization"
Sep 4 17:19:29.478095 dockerd[1649]: time="2024-09-04T17:19:29.478010925Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:19:29.478322 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:19:30.187417 containerd[1457]: time="2024-09-04T17:19:30.187374485Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\""
Sep 4 17:19:30.955350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239124645.mount: Deactivated successfully.
Sep 4 17:19:32.695517 containerd[1457]: time="2024-09-04T17:19:32.695454486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:32.696354 containerd[1457]: time="2024-09-04T17:19:32.696311369Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=34530735"
Sep 4 17:19:32.697531 containerd[1457]: time="2024-09-04T17:19:32.697498710Z" level=info msg="ImageCreate event name:\"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:32.700413 containerd[1457]: time="2024-09-04T17:19:32.700375981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:32.701692 containerd[1457]: time="2024-09-04T17:19:32.701658290Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"34527535\" in 2.514242218s"
Sep 4 17:19:32.701752 containerd[1457]: time="2024-09-04T17:19:32.701693136Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:5447bb21fa283749e558782cbef636f1991732f1b8f345296a5204ccf0b5f7b7\""
Sep 4 17:19:32.732315 containerd[1457]: time="2024-09-04T17:19:32.732264990Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\""
Sep 4 17:19:32.963612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:19:32.971068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:19:33.127572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:19:33.132979 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:19:33.429738 kubelet[1860]: E0904 17:19:33.429546    1860 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:19:33.437568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:19:33.437794 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:19:35.615349 containerd[1457]: time="2024-09-04T17:19:35.615282997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:35.616445 containerd[1457]: time="2024-09-04T17:19:35.616396816Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=31849709"
Sep 4 17:19:35.617978 containerd[1457]: time="2024-09-04T17:19:35.617944029Z" level=info msg="ImageCreate event name:\"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:35.622162 containerd[1457]: time="2024-09-04T17:19:35.622111371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:35.623163 containerd[1457]: time="2024-09-04T17:19:35.623123538Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"33399655\" in 2.890815586s"
Sep 4 17:19:35.623210 containerd[1457]: time="2024-09-04T17:19:35.623163608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:f1a0a396058d414b391ade9dba6e95d7a71ee665b09fc0fc420126ac21c155a5\""
Sep 4 17:19:35.645754 containerd[1457]: time="2024-09-04T17:19:35.645700856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\""
Sep 4 17:19:37.217327 containerd[1457]: time="2024-09-04T17:19:37.217275218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:37.218025 containerd[1457]: time="2024-09-04T17:19:37.217996999Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=17097777"
Sep 4 17:19:37.219386 containerd[1457]: time="2024-09-04T17:19:37.219336224Z" level=info msg="ImageCreate event name:\"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:37.223874 containerd[1457]: time="2024-09-04T17:19:37.223807248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:37.225077 containerd[1457]: time="2024-09-04T17:19:37.225036992Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"18647741\" in 1.579292683s"
Sep 4 17:19:37.225161 containerd[1457]: time="2024-09-04T17:19:37.225075391Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:a60f64c0f37d085a5fcafef1b2a7adc9be95184dae7d8a5d1dbf6ca4681d328a\""
Sep 4 17:19:37.249867 containerd[1457]: time="2024-09-04T17:19:37.249823308Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\""
Sep 4 17:19:38.492917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3389728382.mount: Deactivated successfully.
Sep 4 17:19:39.281748 containerd[1457]: time="2024-09-04T17:19:39.281676840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:39.282597 containerd[1457]: time="2024-09-04T17:19:39.282555915Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=28303449"
Sep 4 17:19:39.283692 containerd[1457]: time="2024-09-04T17:19:39.283653212Z" level=info msg="ImageCreate event name:\"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:39.286282 containerd[1457]: time="2024-09-04T17:19:39.286252281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:39.287018 containerd[1457]: time="2024-09-04T17:19:39.286971221Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"28302468\" in 2.037104447s"
Sep 4 17:19:39.287061 containerd[1457]: time="2024-09-04T17:19:39.287016427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:31fde28e72a31599555ab5aba850caa90b9254b760b1007bfb662d086bb672fc\""
Sep 4 17:19:39.310305 containerd[1457]: time="2024-09-04T17:19:39.310261370Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:19:39.805271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078070822.mount: Deactivated successfully.
Sep 4 17:19:39.811741 containerd[1457]: time="2024-09-04T17:19:39.811692833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:39.812474 containerd[1457]: time="2024-09-04T17:19:39.812418200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Sep 4 17:19:39.813641 containerd[1457]: time="2024-09-04T17:19:39.813605916Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:39.816435 containerd[1457]: time="2024-09-04T17:19:39.816402263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:39.817342 containerd[1457]: time="2024-09-04T17:19:39.817293788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 506.995979ms"
Sep 4 17:19:39.817387 containerd[1457]: time="2024-09-04T17:19:39.817338572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Sep 4 17:19:39.840040 containerd[1457]: time="2024-09-04T17:19:39.839779928Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep 4 17:19:40.383688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542752677.mount: Deactivated successfully.
Sep 4 17:19:43.598626 containerd[1457]: time="2024-09-04T17:19:43.598568798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:43.599630 containerd[1457]: time="2024-09-04T17:19:43.599555467Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Sep 4 17:19:43.609151 containerd[1457]: time="2024-09-04T17:19:43.609089183Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:43.615223 containerd[1457]: time="2024-09-04T17:19:43.615166372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:43.617039 containerd[1457]: time="2024-09-04T17:19:43.616998012Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.77717889s"
Sep 4 17:19:43.617100 containerd[1457]: time="2024-09-04T17:19:43.617039139Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Sep 4 17:19:43.647943 containerd[1457]: time="2024-09-04T17:19:43.647890983Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep 4 17:19:43.688361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:19:43.698235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:19:43.844408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:19:43.850114 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:19:43.993464 kubelet[1976]: E0904 17:19:43.993313    1976 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:19:43.998256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:19:43.998463 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:19:44.551998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048438747.mount: Deactivated successfully.
Sep 4 17:19:45.220776 containerd[1457]: time="2024-09-04T17:19:45.220714100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:45.221543 containerd[1457]: time="2024-09-04T17:19:45.221475392Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Sep 4 17:19:45.222787 containerd[1457]: time="2024-09-04T17:19:45.222754577Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:45.225150 containerd[1457]: time="2024-09-04T17:19:45.225111072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:19:45.225828 containerd[1457]: time="2024-09-04T17:19:45.225781256Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.577842645s"
Sep 4 17:19:45.225866 containerd[1457]: time="2024-09-04T17:19:45.225832329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Sep 4 17:19:47.991976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:19:48.011186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:19:48.039016 systemd[1]: Reloading requested from client PID 2070 ('systemctl') (unit session-7.scope)...
Sep 4 17:19:48.039037 systemd[1]: Reloading... Sep 4 17:19:48.173123 zram_generator::config[2107]: No configuration found. Sep 4 17:19:48.960744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:19:49.048255 systemd[1]: Reloading finished in 1008 ms. Sep 4 17:19:49.095240 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:19:49.095356 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:19:49.095673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:49.097483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:19:49.246106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:49.251741 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:19:49.306775 kubelet[2156]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:19:49.306775 kubelet[2156]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:19:49.306775 kubelet[2156]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:19:49.307253 kubelet[2156]: I0904 17:19:49.306825 2156 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:19:49.771587 kubelet[2156]: I0904 17:19:49.771536 2156 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:19:49.771587 kubelet[2156]: I0904 17:19:49.771570 2156 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:19:49.771845 kubelet[2156]: I0904 17:19:49.771804 2156 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:19:49.787110 kubelet[2156]: I0904 17:19:49.787043 2156 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:19:49.788586 kubelet[2156]: E0904 17:19:49.788546 2156 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:49.847802 kubelet[2156]: I0904 17:19:49.847740 2156 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:19:49.849659 kubelet[2156]: I0904 17:19:49.849539 2156 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:19:49.850401 kubelet[2156]: I0904 17:19:49.850127 2156 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:19:49.850401 kubelet[2156]: I0904 17:19:49.850162 2156 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:19:49.850401 kubelet[2156]: I0904 17:19:49.850182 2156 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:19:49.851203 kubelet[2156]: I0904 
17:19:49.851100 2156 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:19:49.852676 kubelet[2156]: I0904 17:19:49.852329 2156 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:19:49.852676 kubelet[2156]: I0904 17:19:49.852379 2156 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:19:49.852676 kubelet[2156]: I0904 17:19:49.852413 2156 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:19:49.852676 kubelet[2156]: I0904 17:19:49.852431 2156 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:19:49.853190 kubelet[2156]: W0904 17:19:49.853125 2156 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:49.853190 kubelet[2156]: E0904 17:19:49.853187 2156 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:49.853635 kubelet[2156]: W0904 17:19:49.853604 2156 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:49.853685 kubelet[2156]: E0904 17:19:49.853638 2156 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:49.854143 kubelet[2156]: I0904 17:19:49.854124 2156 kuberuntime_manager.go:257] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:19:49.857285 kubelet[2156]: W0904 17:19:49.857260 2156 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:19:49.857971 kubelet[2156]: I0904 17:19:49.857936 2156 server.go:1232] "Started kubelet" Sep 4 17:19:49.859940 kubelet[2156]: I0904 17:19:49.858277 2156 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:19:49.859940 kubelet[2156]: I0904 17:19:49.859143 2156 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:19:49.859940 kubelet[2156]: I0904 17:19:49.859573 2156 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:19:49.859940 kubelet[2156]: I0904 17:19:49.859857 2156 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:19:49.861203 kubelet[2156]: E0904 17:19:49.861147 2156 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:19:49.861203 kubelet[2156]: E0904 17:19:49.861169 2156 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:19:49.861736 kubelet[2156]: E0904 17:19:49.861640 2156 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17f21a2d79ef776c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 19, 49, 857908588, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 19, 49, 857908588, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.43:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.43:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:19:49.862690 kubelet[2156]: I0904 17:19:49.862673 2156 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:19:49.863585 kubelet[2156]: E0904 17:19:49.863097 2156 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:19:49.863585 kubelet[2156]: I0904 17:19:49.863130 2156 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:19:49.863585 kubelet[2156]: I0904 17:19:49.863195 2156 
desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:19:49.863585 kubelet[2156]: I0904 17:19:49.863245 2156 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:19:49.863585 kubelet[2156]: W0904 17:19:49.863461 2156 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:49.863585 kubelet[2156]: E0904 17:19:49.863491 2156 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:49.864643 kubelet[2156]: E0904 17:19:49.864620 2156 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" Sep 4 17:19:49.881341 kubelet[2156]: I0904 17:19:49.881307 2156 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:19:49.883542 kubelet[2156]: I0904 17:19:49.883528 2156 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:19:49.883633 kubelet[2156]: I0904 17:19:49.883622 2156 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:19:49.883719 kubelet[2156]: I0904 17:19:49.883707 2156 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:19:49.883878 kubelet[2156]: E0904 17:19:49.883866 2156 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:19:49.884709 kubelet[2156]: W0904 17:19:49.884642 2156 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:49.884779 kubelet[2156]: E0904 17:19:49.884715 2156 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:49.905856 kubelet[2156]: I0904 17:19:49.905803 2156 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:19:49.905856 kubelet[2156]: I0904 17:19:49.905844 2156 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:19:49.905856 kubelet[2156]: I0904 17:19:49.905862 2156 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:19:49.937651 kubelet[2156]: I0904 17:19:49.937631 2156 policy_none.go:49] "None policy: Start" Sep 4 17:19:49.938158 kubelet[2156]: I0904 17:19:49.938138 2156 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:19:49.938158 kubelet[2156]: I0904 17:19:49.938160 2156 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:19:49.953776 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 4 17:19:49.964214 kubelet[2156]: I0904 17:19:49.964196 2156 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:49.964546 kubelet[2156]: E0904 17:19:49.964526 2156 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Sep 4 17:19:49.967110 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:19:49.970216 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:19:49.981631 kubelet[2156]: I0904 17:19:49.981606 2156 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:19:49.982034 kubelet[2156]: I0904 17:19:49.981887 2156 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:19:49.982323 kubelet[2156]: E0904 17:19:49.982298 2156 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:19:49.984443 kubelet[2156]: I0904 17:19:49.984426 2156 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:19:49.985271 kubelet[2156]: I0904 17:19:49.985248 2156 topology_manager.go:215] "Topology Admit Handler" podUID="c825f0bf657b36fc191df194d537c68e" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:19:49.986068 kubelet[2156]: I0904 17:19:49.986054 2156 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:19:49.991537 systemd[1]: Created slice kubepods-burstable-podcacd2a680dbc59f99275412e0ba6e38b.slice - libcontainer container 
kubepods-burstable-podcacd2a680dbc59f99275412e0ba6e38b.slice. Sep 4 17:19:50.010996 systemd[1]: Created slice kubepods-burstable-podc825f0bf657b36fc191df194d537c68e.slice - libcontainer container kubepods-burstable-podc825f0bf657b36fc191df194d537c68e.slice. Sep 4 17:19:50.029100 systemd[1]: Created slice kubepods-burstable-podf5bf8d52acd7337c82951a97b42c345d.slice - libcontainer container kubepods-burstable-podf5bf8d52acd7337c82951a97b42c345d.slice. Sep 4 17:19:50.065166 kubelet[2156]: E0904 17:19:50.065127 2156 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" Sep 4 17:19:50.164555 kubelet[2156]: I0904 17:19:50.164527 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c825f0bf657b36fc191df194d537c68e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c825f0bf657b36fc191df194d537c68e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:50.164619 kubelet[2156]: I0904 17:19:50.164562 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:50.164619 kubelet[2156]: I0904 17:19:50.164581 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 
17:19:50.164619 kubelet[2156]: I0904 17:19:50.164602 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:50.164619 kubelet[2156]: I0904 17:19:50.164620 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:19:50.164769 kubelet[2156]: I0904 17:19:50.164648 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c825f0bf657b36fc191df194d537c68e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c825f0bf657b36fc191df194d537c68e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:50.164769 kubelet[2156]: I0904 17:19:50.164668 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c825f0bf657b36fc191df194d537c68e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c825f0bf657b36fc191df194d537c68e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:50.164769 kubelet[2156]: I0904 17:19:50.164690 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:50.164769 
kubelet[2156]: I0904 17:19:50.164717 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:50.165676 kubelet[2156]: I0904 17:19:50.165652 2156 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:50.165962 kubelet[2156]: E0904 17:19:50.165934 2156 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Sep 4 17:19:50.309703 kubelet[2156]: E0904 17:19:50.309591 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:50.310367 containerd[1457]: time="2024-09-04T17:19:50.310311869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:50.326569 kubelet[2156]: E0904 17:19:50.326539 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:50.326994 containerd[1457]: time="2024-09-04T17:19:50.326963907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c825f0bf657b36fc191df194d537c68e,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:50.331268 kubelet[2156]: E0904 17:19:50.331248 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:50.331517 containerd[1457]: 
time="2024-09-04T17:19:50.331490753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:50.465636 kubelet[2156]: E0904 17:19:50.465605 2156 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" Sep 4 17:19:50.567319 kubelet[2156]: I0904 17:19:50.567199 2156 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:50.567516 kubelet[2156]: E0904 17:19:50.567502 2156 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Sep 4 17:19:51.022413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086915036.mount: Deactivated successfully. 
Sep 4 17:19:51.034893 kubelet[2156]: W0904 17:19:51.034806 2156 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:51.034893 kubelet[2156]: E0904 17:19:51.034891 2156 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:51.173225 kubelet[2156]: W0904 17:19:51.173149 2156 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:51.173225 kubelet[2156]: E0904 17:19:51.173226 2156 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:51.194400 containerd[1457]: time="2024-09-04T17:19:51.194333642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:51.216629 containerd[1457]: time="2024-09-04T17:19:51.216502624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 17:19:51.225742 containerd[1457]: time="2024-09-04T17:19:51.225687941Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:51.237685 containerd[1457]: time="2024-09-04T17:19:51.237630138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:19:51.249454 containerd[1457]: time="2024-09-04T17:19:51.249400427Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:51.262885 containerd[1457]: time="2024-09-04T17:19:51.262795903Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:51.266805 kubelet[2156]: E0904 17:19:51.266778 2156 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s" Sep 4 17:19:51.271687 containerd[1457]: time="2024-09-04T17:19:51.271630555Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:19:51.282932 containerd[1457]: time="2024-09-04T17:19:51.282811346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:19:51.283700 containerd[1457]: time="2024-09-04T17:19:51.283661388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 973.213928ms" Sep 4 17:19:51.284996 containerd[1457]: time="2024-09-04T17:19:51.284937226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 957.889441ms" Sep 4 17:19:51.316180 containerd[1457]: time="2024-09-04T17:19:51.316126798Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 984.557783ms" Sep 4 17:19:51.326738 kubelet[2156]: W0904 17:19:51.326675 2156 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:51.326738 kubelet[2156]: E0904 17:19:51.326736 2156 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:51.326738 kubelet[2156]: W0904 17:19:51.326675 2156 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:51.326738 kubelet[2156]: E0904 17:19:51.326759 2156 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:51.369443 kubelet[2156]: I0904 17:19:51.369409 2156 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:51.369906 kubelet[2156]: E0904 17:19:51.369873 2156 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Sep 4 17:19:51.811532 kubelet[2156]: E0904 17:19:51.811499 2156 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused Sep 4 17:19:51.933495 containerd[1457]: time="2024-09-04T17:19:51.933385854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:51.934600 containerd[1457]: time="2024-09-04T17:19:51.933525919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:51.934600 containerd[1457]: time="2024-09-04T17:19:51.933621657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:51.934600 containerd[1457]: time="2024-09-04T17:19:51.933843259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:51.934600 containerd[1457]: time="2024-09-04T17:19:51.934314735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:51.934600 containerd[1457]: time="2024-09-04T17:19:51.934452244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:51.935446 containerd[1457]: time="2024-09-04T17:19:51.935379611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:51.935446 containerd[1457]: time="2024-09-04T17:19:51.935411674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:51.943924 containerd[1457]: time="2024-09-04T17:19:51.942886385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:51.943924 containerd[1457]: time="2024-09-04T17:19:51.942953449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:51.943924 containerd[1457]: time="2024-09-04T17:19:51.942976716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:51.943924 containerd[1457]: time="2024-09-04T17:19:51.942994438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:51.956075 systemd[1]: Started cri-containerd-3781e842d947416121f33a3706dbd8076ec04fed97b3f5522ed561dfd7be7319.scope - libcontainer container 3781e842d947416121f33a3706dbd8076ec04fed97b3f5522ed561dfd7be7319. Sep 4 17:19:51.961936 systemd[1]: Started cri-containerd-572de7e4aed719b76c4072f4ae820cdf3d962290763f50f89d56b1c7e438b304.scope - libcontainer container 572de7e4aed719b76c4072f4ae820cdf3d962290763f50f89d56b1c7e438b304. 
Sep 4 17:19:51.966877 systemd[1]: Started cri-containerd-7b490ccc7caecfb02730358090442409733561ffb5a14a1afe2ed755caaff905.scope - libcontainer container 7b490ccc7caecfb02730358090442409733561ffb5a14a1afe2ed755caaff905. Sep 4 17:19:52.006902 containerd[1457]: time="2024-09-04T17:19:52.006736774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c825f0bf657b36fc191df194d537c68e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3781e842d947416121f33a3706dbd8076ec04fed97b3f5522ed561dfd7be7319\"" Sep 4 17:19:52.008764 kubelet[2156]: E0904 17:19:52.008731 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:52.010098 containerd[1457]: time="2024-09-04T17:19:52.010064566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,} returns sandbox id \"572de7e4aed719b76c4072f4ae820cdf3d962290763f50f89d56b1c7e438b304\"" Sep 4 17:19:52.011205 kubelet[2156]: E0904 17:19:52.011173 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:52.012002 containerd[1457]: time="2024-09-04T17:19:52.011956638Z" level=info msg="CreateContainer within sandbox \"3781e842d947416121f33a3706dbd8076ec04fed97b3f5522ed561dfd7be7319\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:19:52.012055 containerd[1457]: time="2024-09-04T17:19:52.012000261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b490ccc7caecfb02730358090442409733561ffb5a14a1afe2ed755caaff905\"" Sep 4 17:19:52.012539 kubelet[2156]: E0904 17:19:52.012513 2156 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:52.013214 containerd[1457]: time="2024-09-04T17:19:52.013180346Z" level=info msg="CreateContainer within sandbox \"572de7e4aed719b76c4072f4ae820cdf3d962290763f50f89d56b1c7e438b304\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:19:52.015441 containerd[1457]: time="2024-09-04T17:19:52.015376600Z" level=info msg="CreateContainer within sandbox \"7b490ccc7caecfb02730358090442409733561ffb5a14a1afe2ed755caaff905\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:19:52.290429 containerd[1457]: time="2024-09-04T17:19:52.290276629Z" level=info msg="CreateContainer within sandbox \"3781e842d947416121f33a3706dbd8076ec04fed97b3f5522ed561dfd7be7319\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7dc922adc7a4fcd0789da4d31e26465233e57fe9b3a08701efe904adefa93619\"" Sep 4 17:19:52.291744 containerd[1457]: time="2024-09-04T17:19:52.291715907Z" level=info msg="StartContainer for \"7dc922adc7a4fcd0789da4d31e26465233e57fe9b3a08701efe904adefa93619\"" Sep 4 17:19:52.310023 containerd[1457]: time="2024-09-04T17:19:52.309968426Z" level=info msg="CreateContainer within sandbox \"572de7e4aed719b76c4072f4ae820cdf3d962290763f50f89d56b1c7e438b304\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f9219ba69f3f7a5f57964a570436d34d095c38b057bc6e062651d0d544cfc817\"" Sep 4 17:19:52.311641 containerd[1457]: time="2024-09-04T17:19:52.310649507Z" level=info msg="StartContainer for \"f9219ba69f3f7a5f57964a570436d34d095c38b057bc6e062651d0d544cfc817\"" Sep 4 17:19:52.318354 systemd[1]: Started cri-containerd-7dc922adc7a4fcd0789da4d31e26465233e57fe9b3a08701efe904adefa93619.scope - libcontainer container 7dc922adc7a4fcd0789da4d31e26465233e57fe9b3a08701efe904adefa93619. 
Sep 4 17:19:52.343025 systemd[1]: Started cri-containerd-f9219ba69f3f7a5f57964a570436d34d095c38b057bc6e062651d0d544cfc817.scope - libcontainer container f9219ba69f3f7a5f57964a570436d34d095c38b057bc6e062651d0d544cfc817. Sep 4 17:19:52.343345 containerd[1457]: time="2024-09-04T17:19:52.343208189Z" level=info msg="CreateContainer within sandbox \"7b490ccc7caecfb02730358090442409733561ffb5a14a1afe2ed755caaff905\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7c13773b5532a2146362d30abf8bdedad3c3a46abd1ea38b62f480df69fb419f\"" Sep 4 17:19:52.344021 containerd[1457]: time="2024-09-04T17:19:52.343942069Z" level=info msg="StartContainer for \"7c13773b5532a2146362d30abf8bdedad3c3a46abd1ea38b62f480df69fb419f\"" Sep 4 17:19:52.372627 systemd[1]: Started cri-containerd-7c13773b5532a2146362d30abf8bdedad3c3a46abd1ea38b62f480df69fb419f.scope - libcontainer container 7c13773b5532a2146362d30abf8bdedad3c3a46abd1ea38b62f480df69fb419f. Sep 4 17:19:52.386924 containerd[1457]: time="2024-09-04T17:19:52.386857293Z" level=info msg="StartContainer for \"7dc922adc7a4fcd0789da4d31e26465233e57fe9b3a08701efe904adefa93619\" returns successfully" Sep 4 17:19:52.506115 containerd[1457]: time="2024-09-04T17:19:52.506055416Z" level=info msg="StartContainer for \"7c13773b5532a2146362d30abf8bdedad3c3a46abd1ea38b62f480df69fb419f\" returns successfully" Sep 4 17:19:52.506237 containerd[1457]: time="2024-09-04T17:19:52.506134179Z" level=info msg="StartContainer for \"f9219ba69f3f7a5f57964a570436d34d095c38b057bc6e062651d0d544cfc817\" returns successfully" Sep 4 17:19:52.894971 kubelet[2156]: E0904 17:19:52.894934 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:52.898837 kubelet[2156]: E0904 17:19:52.898750 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:52.900888 kubelet[2156]: E0904 17:19:52.900858 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:52.971426 kubelet[2156]: I0904 17:19:52.971381 2156 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:53.337408 kubelet[2156]: E0904 17:19:53.336933 2156 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:19:53.383302 kubelet[2156]: I0904 17:19:53.383216 2156 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:19:53.855435 kubelet[2156]: I0904 17:19:53.855332 2156 apiserver.go:52] "Watching apiserver" Sep 4 17:19:53.864235 kubelet[2156]: I0904 17:19:53.864173 2156 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:19:53.905294 kubelet[2156]: E0904 17:19:53.905257 2156 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:53.905848 kubelet[2156]: E0904 17:19:53.905709 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:54.805253 kubelet[2156]: E0904 17:19:54.805165 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:54.901119 kubelet[2156]: E0904 17:19:54.901086 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 4 17:19:55.827622 kubelet[2156]: E0904 17:19:55.827574 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:55.903585 kubelet[2156]: E0904 17:19:55.903530 2156 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:56.886193 systemd[1]: Reloading requested from client PID 2437 ('systemctl') (unit session-7.scope)... Sep 4 17:19:56.886212 systemd[1]: Reloading... Sep 4 17:19:56.957979 zram_generator::config[2477]: No configuration found. Sep 4 17:19:57.466558 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:19:57.568042 systemd[1]: Reloading finished in 681 ms. Sep 4 17:19:57.611249 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:19:57.633206 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:19:57.633511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:57.633564 systemd[1]: kubelet.service: Consumed 1.190s CPU time, 115.4M memory peak, 0B memory swap peak. Sep 4 17:19:57.644052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:19:57.786409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:19:57.791272 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:19:57.850417 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:19:57.850417 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:19:57.850417 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:19:57.850921 kubelet[2519]: I0904 17:19:57.850448 2519 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:19:57.854924 kubelet[2519]: I0904 17:19:57.854889 2519 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:19:57.854924 kubelet[2519]: I0904 17:19:57.854917 2519 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:19:57.855078 kubelet[2519]: I0904 17:19:57.855066 2519 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:19:57.856348 kubelet[2519]: I0904 17:19:57.856326 2519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:19:57.857182 kubelet[2519]: I0904 17:19:57.857157 2519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:19:57.866671 kubelet[2519]: I0904 17:19:57.866612 2519 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:19:57.866872 kubelet[2519]: I0904 17:19:57.866853 2519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:19:57.867049 kubelet[2519]: I0904 17:19:57.867010 2519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:19:57.867049 kubelet[2519]: I0904 17:19:57.867039 2519 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:19:57.867049 kubelet[2519]: I0904 17:19:57.867051 2519 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:19:57.867212 kubelet[2519]: I0904 
17:19:57.867112 2519 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:19:57.867212 kubelet[2519]: I0904 17:19:57.867211 2519 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:19:57.867278 kubelet[2519]: I0904 17:19:57.867222 2519 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:19:57.867278 kubelet[2519]: I0904 17:19:57.867252 2519 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:19:57.867278 kubelet[2519]: I0904 17:19:57.867267 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:19:57.868299 kubelet[2519]: I0904 17:19:57.868281 2519 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:19:57.870848 kubelet[2519]: I0904 17:19:57.868914 2519 server.go:1232] "Started kubelet" Sep 4 17:19:57.870848 kubelet[2519]: I0904 17:19:57.869861 2519 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:19:57.870848 kubelet[2519]: I0904 17:19:57.870694 2519 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:19:57.870988 kubelet[2519]: I0904 17:19:57.870906 2519 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:19:57.871051 kubelet[2519]: I0904 17:19:57.871020 2519 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:19:57.871223 kubelet[2519]: E0904 17:19:57.871196 2519 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:19:57.871310 kubelet[2519]: E0904 17:19:57.871289 2519 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:19:57.872688 kubelet[2519]: I0904 17:19:57.872664 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:19:57.877293 kubelet[2519]: I0904 17:19:57.877264 2519 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:19:57.878912 kubelet[2519]: I0904 17:19:57.878894 2519 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:19:57.881275 kubelet[2519]: I0904 17:19:57.881203 2519 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:19:57.892806 kubelet[2519]: I0904 17:19:57.892778 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:19:57.895911 kubelet[2519]: I0904 17:19:57.895890 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:19:57.895911 kubelet[2519]: I0904 17:19:57.895910 2519 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:19:57.896033 kubelet[2519]: I0904 17:19:57.895944 2519 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:19:57.896033 kubelet[2519]: E0904 17:19:57.895986 2519 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:19:57.943865 kubelet[2519]: I0904 17:19:57.943836 2519 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:19:57.943865 kubelet[2519]: I0904 17:19:57.943858 2519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:19:57.943865 kubelet[2519]: I0904 17:19:57.943874 2519 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:19:57.944047 kubelet[2519]: I0904 17:19:57.944040 2519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:19:57.944079 kubelet[2519]: I0904 17:19:57.944061 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:19:57.944079 kubelet[2519]: 
I0904 17:19:57.944068 2519 policy_none.go:49] "None policy: Start" Sep 4 17:19:57.944558 kubelet[2519]: I0904 17:19:57.944540 2519 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:19:57.944612 kubelet[2519]: I0904 17:19:57.944563 2519 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:19:57.944782 kubelet[2519]: I0904 17:19:57.944740 2519 state_mem.go:75] "Updated machine memory state" Sep 4 17:19:57.948939 kubelet[2519]: I0904 17:19:57.948834 2519 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:19:57.949159 kubelet[2519]: I0904 17:19:57.949134 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:19:57.983188 kubelet[2519]: I0904 17:19:57.983153 2519 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:19:57.988671 kubelet[2519]: I0904 17:19:57.988639 2519 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Sep 4 17:19:57.988835 kubelet[2519]: I0904 17:19:57.988734 2519 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:19:57.996143 kubelet[2519]: I0904 17:19:57.996108 2519 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:19:57.996261 kubelet[2519]: I0904 17:19:57.996194 2519 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:19:57.996261 kubelet[2519]: I0904 17:19:57.996231 2519 topology_manager.go:215] "Topology Admit Handler" podUID="c825f0bf657b36fc191df194d537c68e" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:19:58.008836 kubelet[2519]: E0904 17:19:58.007385 2519 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:58.008836 kubelet[2519]: E0904 17:19:58.007497 2519 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:58.082038 kubelet[2519]: I0904 17:19:58.081928 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:58.082038 kubelet[2519]: I0904 17:19:58.081965 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:58.082038 kubelet[2519]: I0904 17:19:58.081986 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c825f0bf657b36fc191df194d537c68e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c825f0bf657b36fc191df194d537c68e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:58.082038 kubelet[2519]: I0904 17:19:58.082004 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:58.082038 kubelet[2519]: I0904 17:19:58.082025 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:58.082308 kubelet[2519]: I0904 17:19:58.082041 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:19:58.082308 kubelet[2519]: I0904 17:19:58.082067 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c825f0bf657b36fc191df194d537c68e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c825f0bf657b36fc191df194d537c68e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:58.082308 kubelet[2519]: I0904 17:19:58.082085 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c825f0bf657b36fc191df194d537c68e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c825f0bf657b36fc191df194d537c68e\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:58.082308 kubelet[2519]: I0904 17:19:58.082101 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:19:58.301806 kubelet[2519]: E0904 17:19:58.301775 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:58.308063 kubelet[2519]: E0904 17:19:58.307991 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:58.308063 kubelet[2519]: E0904 17:19:58.308056 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:58.868294 kubelet[2519]: I0904 17:19:58.868257 2519 apiserver.go:52] "Watching apiserver" Sep 4 17:19:58.879697 kubelet[2519]: I0904 17:19:58.879658 2519 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:19:58.904753 kubelet[2519]: E0904 17:19:58.904712 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:58.905802 kubelet[2519]: E0904 17:19:58.905567 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:59.016837 kubelet[2519]: E0904 17:19:59.015411 2519 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:19:59.016837 kubelet[2519]: E0904 17:19:59.015908 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:19:59.025739 kubelet[2519]: I0904 17:19:59.025384 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.025327578 podCreationTimestamp="2024-09-04 
17:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:59.024020178 +0000 UTC m=+1.228394060" watchObservedRunningTime="2024-09-04 17:19:59.025327578 +0000 UTC m=+1.229701460" Sep 4 17:19:59.025739 kubelet[2519]: I0904 17:19:59.025497 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.025476599 podCreationTimestamp="2024-09-04 17:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:59.017972262 +0000 UTC m=+1.222346144" watchObservedRunningTime="2024-09-04 17:19:59.025476599 +0000 UTC m=+1.229850481" Sep 4 17:19:59.060739 kubelet[2519]: I0904 17:19:59.060694 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.060649334 podCreationTimestamp="2024-09-04 17:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:59.060564343 +0000 UTC m=+1.264938225" watchObservedRunningTime="2024-09-04 17:19:59.060649334 +0000 UTC m=+1.265023216" Sep 4 17:19:59.906855 kubelet[2519]: E0904 17:19:59.906800 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:02.914584 sudo[1639]: pam_unix(sudo:session): session closed for user root Sep 4 17:20:03.376737 sshd[1635]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:03.381229 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:33290.service: Deactivated successfully. Sep 4 17:20:03.383154 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 4 17:20:03.383382 systemd[1]: session-7.scope: Consumed 5.027s CPU time, 138.3M memory peak, 0B memory swap peak. Sep 4 17:20:03.383840 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:20:03.384661 systemd-logind[1440]: Removed session 7. Sep 4 17:20:03.945716 kubelet[2519]: E0904 17:20:03.945659 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:04.913635 kubelet[2519]: E0904 17:20:04.913603 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:05.810255 update_engine[1445]: I0904 17:20:05.810194 1445 update_attempter.cc:509] Updating boot flags... Sep 4 17:20:05.844618 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2615) Sep 4 17:20:05.875846 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2613) Sep 4 17:20:07.246131 kubelet[2519]: E0904 17:20:07.246080 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:07.918395 kubelet[2519]: E0904 17:20:07.918330 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:08.720941 kubelet[2519]: E0904 17:20:08.720899 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:09.093701 kubelet[2519]: I0904 17:20:09.093656 2519 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 
17:20:09.094245 containerd[1457]: time="2024-09-04T17:20:09.094203880Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:20:09.094719 kubelet[2519]: I0904 17:20:09.094340 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:20:09.974747 kubelet[2519]: I0904 17:20:09.974704 2519 topology_manager.go:215] "Topology Admit Handler" podUID="b15e81fc-fde4-4aaf-a42d-98fbdfa22976" podNamespace="kube-system" podName="kube-proxy-hm9wc" Sep 4 17:20:09.988233 systemd[1]: Created slice kubepods-besteffort-podb15e81fc_fde4_4aaf_a42d_98fbdfa22976.slice - libcontainer container kubepods-besteffort-podb15e81fc_fde4_4aaf_a42d_98fbdfa22976.slice. Sep 4 17:20:10.057846 kubelet[2519]: I0904 17:20:10.057758 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b15e81fc-fde4-4aaf-a42d-98fbdfa22976-lib-modules\") pod \"kube-proxy-hm9wc\" (UID: \"b15e81fc-fde4-4aaf-a42d-98fbdfa22976\") " pod="kube-system/kube-proxy-hm9wc" Sep 4 17:20:10.057846 kubelet[2519]: I0904 17:20:10.057839 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b15e81fc-fde4-4aaf-a42d-98fbdfa22976-kube-proxy\") pod \"kube-proxy-hm9wc\" (UID: \"b15e81fc-fde4-4aaf-a42d-98fbdfa22976\") " pod="kube-system/kube-proxy-hm9wc" Sep 4 17:20:10.058067 kubelet[2519]: I0904 17:20:10.057884 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b15e81fc-fde4-4aaf-a42d-98fbdfa22976-xtables-lock\") pod \"kube-proxy-hm9wc\" (UID: \"b15e81fc-fde4-4aaf-a42d-98fbdfa22976\") " pod="kube-system/kube-proxy-hm9wc" Sep 4 17:20:10.058067 kubelet[2519]: I0904 17:20:10.057940 2519 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsjrj\" (UniqueName: \"kubernetes.io/projected/b15e81fc-fde4-4aaf-a42d-98fbdfa22976-kube-api-access-lsjrj\") pod \"kube-proxy-hm9wc\" (UID: \"b15e81fc-fde4-4aaf-a42d-98fbdfa22976\") " pod="kube-system/kube-proxy-hm9wc" Sep 4 17:20:10.102289 kubelet[2519]: I0904 17:20:10.102216 2519 topology_manager.go:215] "Topology Admit Handler" podUID="904d064e-d121-431d-9baf-044a5171b07d" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-dwskw" Sep 4 17:20:10.115218 systemd[1]: Created slice kubepods-besteffort-pod904d064e_d121_431d_9baf_044a5171b07d.slice - libcontainer container kubepods-besteffort-pod904d064e_d121_431d_9baf_044a5171b07d.slice. Sep 4 17:20:10.158885 kubelet[2519]: I0904 17:20:10.158799 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpmxw\" (UniqueName: \"kubernetes.io/projected/904d064e-d121-431d-9baf-044a5171b07d-kube-api-access-cpmxw\") pod \"tigera-operator-5d56685c77-dwskw\" (UID: \"904d064e-d121-431d-9baf-044a5171b07d\") " pod="tigera-operator/tigera-operator-5d56685c77-dwskw" Sep 4 17:20:10.159043 kubelet[2519]: I0904 17:20:10.158936 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/904d064e-d121-431d-9baf-044a5171b07d-var-lib-calico\") pod \"tigera-operator-5d56685c77-dwskw\" (UID: \"904d064e-d121-431d-9baf-044a5171b07d\") " pod="tigera-operator/tigera-operator-5d56685c77-dwskw" Sep 4 17:20:10.298154 kubelet[2519]: E0904 17:20:10.298027 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:10.298725 containerd[1457]: time="2024-09-04T17:20:10.298690436Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-hm9wc,Uid:b15e81fc-fde4-4aaf-a42d-98fbdfa22976,Namespace:kube-system,Attempt:0,}" Sep 4 17:20:10.326503 containerd[1457]: time="2024-09-04T17:20:10.326274438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:10.326503 containerd[1457]: time="2024-09-04T17:20:10.326324336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:10.326503 containerd[1457]: time="2024-09-04T17:20:10.326339530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:10.326503 containerd[1457]: time="2024-09-04T17:20:10.326349450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:10.355005 systemd[1]: Started cri-containerd-edf62336fa8d779c003b02c46b7dabe9e87825a72e4e9295354c473e938f1bad.scope - libcontainer container edf62336fa8d779c003b02c46b7dabe9e87825a72e4e9295354c473e938f1bad. 
Sep 4 17:20:10.376982 containerd[1457]: time="2024-09-04T17:20:10.376942527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hm9wc,Uid:b15e81fc-fde4-4aaf-a42d-98fbdfa22976,Namespace:kube-system,Attempt:0,} returns sandbox id \"edf62336fa8d779c003b02c46b7dabe9e87825a72e4e9295354c473e938f1bad\"" Sep 4 17:20:10.377769 kubelet[2519]: E0904 17:20:10.377738 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:10.379768 containerd[1457]: time="2024-09-04T17:20:10.379705300Z" level=info msg="CreateContainer within sandbox \"edf62336fa8d779c003b02c46b7dabe9e87825a72e4e9295354c473e938f1bad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:20:10.397797 containerd[1457]: time="2024-09-04T17:20:10.397654243Z" level=info msg="CreateContainer within sandbox \"edf62336fa8d779c003b02c46b7dabe9e87825a72e4e9295354c473e938f1bad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c9d743a76f89b9af031eecd44b76a1e4d5e01569e775fe7101090ec5f077379\"" Sep 4 17:20:10.398387 containerd[1457]: time="2024-09-04T17:20:10.398343713Z" level=info msg="StartContainer for \"9c9d743a76f89b9af031eecd44b76a1e4d5e01569e775fe7101090ec5f077379\"" Sep 4 17:20:10.419489 containerd[1457]: time="2024-09-04T17:20:10.419440186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-dwskw,Uid:904d064e-d121-431d-9baf-044a5171b07d,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:20:10.430011 systemd[1]: Started cri-containerd-9c9d743a76f89b9af031eecd44b76a1e4d5e01569e775fe7101090ec5f077379.scope - libcontainer container 9c9d743a76f89b9af031eecd44b76a1e4d5e01569e775fe7101090ec5f077379. Sep 4 17:20:10.444145 containerd[1457]: time="2024-09-04T17:20:10.443908587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:10.444145 containerd[1457]: time="2024-09-04T17:20:10.443973819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:10.444145 containerd[1457]: time="2024-09-04T17:20:10.443993071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:10.444145 containerd[1457]: time="2024-09-04T17:20:10.444005538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:10.463977 systemd[1]: Started cri-containerd-0c553b27b6c3fb28af035beba787390df634e66cabbd42d9e065b5c47c10cb51.scope - libcontainer container 0c553b27b6c3fb28af035beba787390df634e66cabbd42d9e065b5c47c10cb51. Sep 4 17:20:10.468719 containerd[1457]: time="2024-09-04T17:20:10.468689127Z" level=info msg="StartContainer for \"9c9d743a76f89b9af031eecd44b76a1e4d5e01569e775fe7101090ec5f077379\" returns successfully" Sep 4 17:20:10.506996 containerd[1457]: time="2024-09-04T17:20:10.506884212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-dwskw,Uid:904d064e-d121-431d-9baf-044a5171b07d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0c553b27b6c3fb28af035beba787390df634e66cabbd42d9e065b5c47c10cb51\"" Sep 4 17:20:10.508458 containerd[1457]: time="2024-09-04T17:20:10.508408339Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:20:10.924145 kubelet[2519]: E0904 17:20:10.924102 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:11.742272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304239904.mount: Deactivated successfully. 
Sep 4 17:20:12.135534 containerd[1457]: time="2024-09-04T17:20:12.135467575Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:12.136717 containerd[1457]: time="2024-09-04T17:20:12.136669368Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136497" Sep 4 17:20:12.137924 containerd[1457]: time="2024-09-04T17:20:12.137888447Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:12.141136 containerd[1457]: time="2024-09-04T17:20:12.141088154Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:12.142014 containerd[1457]: time="2024-09-04T17:20:12.141962444Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 1.633522547s" Sep 4 17:20:12.142014 containerd[1457]: time="2024-09-04T17:20:12.141997470Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Sep 4 17:20:12.143808 containerd[1457]: time="2024-09-04T17:20:12.143773771Z" level=info msg="CreateContainer within sandbox \"0c553b27b6c3fb28af035beba787390df634e66cabbd42d9e065b5c47c10cb51\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:20:12.158239 containerd[1457]: time="2024-09-04T17:20:12.158182536Z" level=info msg="CreateContainer within sandbox 
\"0c553b27b6c3fb28af035beba787390df634e66cabbd42d9e065b5c47c10cb51\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"602e255e24629444c013242a7b0f1ec180432cdd50d959d744a5a3f9d1aadde8\"" Sep 4 17:20:12.158789 containerd[1457]: time="2024-09-04T17:20:12.158752656Z" level=info msg="StartContainer for \"602e255e24629444c013242a7b0f1ec180432cdd50d959d744a5a3f9d1aadde8\"" Sep 4 17:20:12.193068 systemd[1]: Started cri-containerd-602e255e24629444c013242a7b0f1ec180432cdd50d959d744a5a3f9d1aadde8.scope - libcontainer container 602e255e24629444c013242a7b0f1ec180432cdd50d959d744a5a3f9d1aadde8. Sep 4 17:20:12.226105 containerd[1457]: time="2024-09-04T17:20:12.226045854Z" level=info msg="StartContainer for \"602e255e24629444c013242a7b0f1ec180432cdd50d959d744a5a3f9d1aadde8\" returns successfully" Sep 4 17:20:12.939233 kubelet[2519]: I0904 17:20:12.939185 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hm9wc" podStartSLOduration=3.939144525 podCreationTimestamp="2024-09-04 17:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:10.931456466 +0000 UTC m=+13.135830348" watchObservedRunningTime="2024-09-04 17:20:12.939144525 +0000 UTC m=+15.143518407" Sep 4 17:20:15.138348 kubelet[2519]: I0904 17:20:15.136737 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-dwskw" podStartSLOduration=3.502392111 podCreationTimestamp="2024-09-04 17:20:10 +0000 UTC" firstStartedPulling="2024-09-04 17:20:10.508010082 +0000 UTC m=+12.712383964" lastFinishedPulling="2024-09-04 17:20:12.142316593 +0000 UTC m=+14.346690475" observedRunningTime="2024-09-04 17:20:12.939125805 +0000 UTC m=+15.143499687" watchObservedRunningTime="2024-09-04 17:20:15.136698622 +0000 UTC m=+17.341072504" Sep 4 17:20:15.138348 kubelet[2519]: I0904 17:20:15.136841 2519 
topology_manager.go:215] "Topology Admit Handler" podUID="b99133fe-bc0e-487c-bcb4-f43b9da2a8ae" podNamespace="calico-system" podName="calico-typha-dcfcbd67d-vsjcf" Sep 4 17:20:15.152662 systemd[1]: Created slice kubepods-besteffort-podb99133fe_bc0e_487c_bcb4_f43b9da2a8ae.slice - libcontainer container kubepods-besteffort-podb99133fe_bc0e_487c_bcb4_f43b9da2a8ae.slice. Sep 4 17:20:15.185221 kubelet[2519]: I0904 17:20:15.184622 2519 topology_manager.go:215] "Topology Admit Handler" podUID="49fd1479-087b-4784-86b5-1eed6de0412d" podNamespace="calico-system" podName="calico-node-bk2nk" Sep 4 17:20:15.190980 kubelet[2519]: I0904 17:20:15.190748 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b99133fe-bc0e-487c-bcb4-f43b9da2a8ae-typha-certs\") pod \"calico-typha-dcfcbd67d-vsjcf\" (UID: \"b99133fe-bc0e-487c-bcb4-f43b9da2a8ae\") " pod="calico-system/calico-typha-dcfcbd67d-vsjcf" Sep 4 17:20:15.190980 kubelet[2519]: I0904 17:20:15.190785 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4znm\" (UniqueName: \"kubernetes.io/projected/b99133fe-bc0e-487c-bcb4-f43b9da2a8ae-kube-api-access-b4znm\") pod \"calico-typha-dcfcbd67d-vsjcf\" (UID: \"b99133fe-bc0e-487c-bcb4-f43b9da2a8ae\") " pod="calico-system/calico-typha-dcfcbd67d-vsjcf" Sep 4 17:20:15.190980 kubelet[2519]: I0904 17:20:15.190833 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b99133fe-bc0e-487c-bcb4-f43b9da2a8ae-tigera-ca-bundle\") pod \"calico-typha-dcfcbd67d-vsjcf\" (UID: \"b99133fe-bc0e-487c-bcb4-f43b9da2a8ae\") " pod="calico-system/calico-typha-dcfcbd67d-vsjcf" Sep 4 17:20:15.192143 systemd[1]: Created slice kubepods-besteffort-pod49fd1479_087b_4784_86b5_1eed6de0412d.slice - libcontainer container 
kubepods-besteffort-pod49fd1479_087b_4784_86b5_1eed6de0412d.slice. Sep 4 17:20:15.291279 kubelet[2519]: I0904 17:20:15.291184 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49fd1479-087b-4784-86b5-1eed6de0412d-var-run-calico\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.291544 kubelet[2519]: I0904 17:20:15.291332 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49fd1479-087b-4784-86b5-1eed6de0412d-xtables-lock\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.291544 kubelet[2519]: I0904 17:20:15.291364 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49fd1479-087b-4784-86b5-1eed6de0412d-tigera-ca-bundle\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.291544 kubelet[2519]: I0904 17:20:15.291392 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49fd1479-087b-4784-86b5-1eed6de0412d-node-certs\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.291544 kubelet[2519]: I0904 17:20:15.291451 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49fd1479-087b-4784-86b5-1eed6de0412d-var-lib-calico\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 
17:20:15.291544 kubelet[2519]: I0904 17:20:15.291481 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/49fd1479-087b-4784-86b5-1eed6de0412d-cni-bin-dir\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.291963 kubelet[2519]: I0904 17:20:15.291525 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49fd1479-087b-4784-86b5-1eed6de0412d-lib-modules\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.291963 kubelet[2519]: I0904 17:20:15.291652 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/49fd1479-087b-4784-86b5-1eed6de0412d-policysync\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.291963 kubelet[2519]: I0904 17:20:15.291719 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4mj\" (UniqueName: \"kubernetes.io/projected/49fd1479-087b-4784-86b5-1eed6de0412d-kube-api-access-zk4mj\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.291963 kubelet[2519]: I0904 17:20:15.291761 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49fd1479-087b-4784-86b5-1eed6de0412d-cni-net-dir\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.291963 kubelet[2519]: I0904 17:20:15.291796 2519 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/49fd1479-087b-4784-86b5-1eed6de0412d-flexvol-driver-host\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.292391 kubelet[2519]: I0904 17:20:15.291916 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49fd1479-087b-4784-86b5-1eed6de0412d-cni-log-dir\") pod \"calico-node-bk2nk\" (UID: \"49fd1479-087b-4784-86b5-1eed6de0412d\") " pod="calico-system/calico-node-bk2nk" Sep 4 17:20:15.310228 kubelet[2519]: I0904 17:20:15.309510 2519 topology_manager.go:215] "Topology Admit Handler" podUID="d2cfa41a-8321-4973-acd5-1a4593214e59" podNamespace="calico-system" podName="csi-node-driver-s949d" Sep 4 17:20:15.310228 kubelet[2519]: E0904 17:20:15.309850 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s949d" podUID="d2cfa41a-8321-4973-acd5-1a4593214e59" Sep 4 17:20:15.392549 kubelet[2519]: I0904 17:20:15.392402 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d2cfa41a-8321-4973-acd5-1a4593214e59-socket-dir\") pod \"csi-node-driver-s949d\" (UID: \"d2cfa41a-8321-4973-acd5-1a4593214e59\") " pod="calico-system/csi-node-driver-s949d" Sep 4 17:20:15.392549 kubelet[2519]: I0904 17:20:15.392540 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfwxg\" (UniqueName: \"kubernetes.io/projected/d2cfa41a-8321-4973-acd5-1a4593214e59-kube-api-access-pfwxg\") pod 
\"csi-node-driver-s949d\" (UID: \"d2cfa41a-8321-4973-acd5-1a4593214e59\") " pod="calico-system/csi-node-driver-s949d" Sep 4 17:20:15.392741 kubelet[2519]: I0904 17:20:15.392604 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d2cfa41a-8321-4973-acd5-1a4593214e59-kubelet-dir\") pod \"csi-node-driver-s949d\" (UID: \"d2cfa41a-8321-4973-acd5-1a4593214e59\") " pod="calico-system/csi-node-driver-s949d" Sep 4 17:20:15.392741 kubelet[2519]: I0904 17:20:15.392654 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d2cfa41a-8321-4973-acd5-1a4593214e59-varrun\") pod \"csi-node-driver-s949d\" (UID: \"d2cfa41a-8321-4973-acd5-1a4593214e59\") " pod="calico-system/csi-node-driver-s949d" Sep 4 17:20:15.392741 kubelet[2519]: I0904 17:20:15.392678 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d2cfa41a-8321-4973-acd5-1a4593214e59-registration-dir\") pod \"csi-node-driver-s949d\" (UID: \"d2cfa41a-8321-4973-acd5-1a4593214e59\") " pod="calico-system/csi-node-driver-s949d" Sep 4 17:20:15.403364 kubelet[2519]: E0904 17:20:15.403326 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.403480 kubelet[2519]: W0904 17:20:15.403353 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.403480 kubelet[2519]: E0904 17:20:15.403413 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:15.406295 kubelet[2519]: E0904 17:20:15.406274 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.406295 kubelet[2519]: W0904 17:20:15.406292 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.406400 kubelet[2519]: E0904 17:20:15.406314 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:15.412005 kubelet[2519]: E0904 17:20:15.411970 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.412005 kubelet[2519]: W0904 17:20:15.411996 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.412155 kubelet[2519]: E0904 17:20:15.412019 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:15.460048 kubelet[2519]: E0904 17:20:15.460001 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:15.460995 containerd[1457]: time="2024-09-04T17:20:15.460550416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dcfcbd67d-vsjcf,Uid:b99133fe-bc0e-487c-bcb4-f43b9da2a8ae,Namespace:calico-system,Attempt:0,}" Sep 4 17:20:15.493161 kubelet[2519]: E0904 17:20:15.493134 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.493161 kubelet[2519]: W0904 17:20:15.493153 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.493161 kubelet[2519]: E0904 17:20:15.493175 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:15.493386 kubelet[2519]: E0904 17:20:15.493374 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.493386 kubelet[2519]: W0904 17:20:15.493382 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.493474 kubelet[2519]: E0904 17:20:15.493395 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:15.493628 kubelet[2519]: E0904 17:20:15.493604 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.493628 kubelet[2519]: W0904 17:20:15.493619 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.493709 kubelet[2519]: E0904 17:20:15.493643 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:15.493859 kubelet[2519]: E0904 17:20:15.493844 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.493859 kubelet[2519]: W0904 17:20:15.493855 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.493964 kubelet[2519]: E0904 17:20:15.493875 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:15.494094 kubelet[2519]: E0904 17:20:15.494080 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.494094 kubelet[2519]: W0904 17:20:15.494091 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.494166 kubelet[2519]: E0904 17:20:15.494109 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:15.494295 kubelet[2519]: E0904 17:20:15.494281 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.494295 kubelet[2519]: W0904 17:20:15.494290 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.494374 kubelet[2519]: E0904 17:20:15.494304 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:15.494497 kubelet[2519]: E0904 17:20:15.494483 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.494497 kubelet[2519]: W0904 17:20:15.494494 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.494577 kubelet[2519]: E0904 17:20:15.494511 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:15.494703 kubelet[2519]: E0904 17:20:15.494690 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.494703 kubelet[2519]: W0904 17:20:15.494700 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.494779 kubelet[2519]: E0904 17:20:15.494716 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:15.494915 kubelet[2519]: E0904 17:20:15.494902 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.494915 kubelet[2519]: W0904 17:20:15.494912 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.494994 kubelet[2519]: E0904 17:20:15.494929 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:15.495144 kubelet[2519]: E0904 17:20:15.495131 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.495144 kubelet[2519]: W0904 17:20:15.495142 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.495306 kubelet[2519]: E0904 17:20:15.495158 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:15.495366 kubelet[2519]: E0904 17:20:15.495351 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.495366 kubelet[2519]: W0904 17:20:15.495361 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.495437 kubelet[2519]: E0904 17:20:15.495378 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:15.495571 kubelet[2519]: E0904 17:20:15.495558 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.495571 kubelet[2519]: W0904 17:20:15.495568 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.495669 kubelet[2519]: E0904 17:20:15.495585 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:20:15.495842 kubelet[2519]: E0904 17:20:15.495828 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.495842 kubelet[2519]: W0904 17:20:15.495839 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.495928 kubelet[2519]: E0904 17:20:15.495867 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:20:15.496031 kubelet[2519]: E0904 17:20:15.496018 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:20:15.496031 kubelet[2519]: W0904 17:20:15.496028 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:20:15.496111 kubelet[2519]: E0904 17:20:15.496059 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 17:20:15.496221 kubelet[2519]: E0904 17:20:15.496208 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.496221 kubelet[2519]: W0904 17:20:15.496218 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.496294 kubelet[2519]: E0904 17:20:15.496235 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.496415 kubelet[2519]: E0904 17:20:15.496402 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.496415 kubelet[2519]: W0904 17:20:15.496412 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.496499 kubelet[2519]: E0904 17:20:15.496428 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.496618 kubelet[2519]: E0904 17:20:15.496591 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.496618 kubelet[2519]: W0904 17:20:15.496614 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.496695 kubelet[2519]: E0904 17:20:15.496631 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.496934 kubelet[2519]: E0904 17:20:15.496919 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.496934 kubelet[2519]: W0904 17:20:15.496932 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.497020 kubelet[2519]: E0904 17:20:15.496950 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.497148 kubelet[2519]: E0904 17:20:15.497125 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.497148 kubelet[2519]: W0904 17:20:15.497134 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.497148 kubelet[2519]: E0904 17:20:15.497146 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.497339 kubelet[2519]: E0904 17:20:15.497323 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.497339 kubelet[2519]: W0904 17:20:15.497332 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.497411 kubelet[2519]: E0904 17:20:15.497351 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.497525 kubelet[2519]: E0904 17:20:15.497509 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.497525 kubelet[2519]: W0904 17:20:15.497517 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.497588 kubelet[2519]: E0904 17:20:15.497530 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.497733 kubelet[2519]: E0904 17:20:15.497717 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.497733 kubelet[2519]: W0904 17:20:15.497725 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.497836 kubelet[2519]: E0904 17:20:15.497740 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.497972 kubelet[2519]: E0904 17:20:15.497956 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.497972 kubelet[2519]: W0904 17:20:15.497970 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.498043 kubelet[2519]: E0904 17:20:15.497989 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.498271 kubelet[2519]: E0904 17:20:15.498255 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.498271 kubelet[2519]: W0904 17:20:15.498265 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.498336 kubelet[2519]: E0904 17:20:15.498277 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.514756 kubelet[2519]: E0904 17:20:15.514723 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.514756 kubelet[2519]: W0904 17:20:15.514745 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.514756 kubelet[2519]: E0904 17:20:15.514768 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.515363 kubelet[2519]: E0904 17:20:15.515261 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:15.516725 containerd[1457]: time="2024-09-04T17:20:15.515766499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bk2nk,Uid:49fd1479-087b-4784-86b5-1eed6de0412d,Namespace:calico-system,Attempt:0,}"
Sep 4 17:20:15.523731 kubelet[2519]: E0904 17:20:15.523676 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:20:15.524240 kubelet[2519]: W0904 17:20:15.523834 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:20:15.524240 kubelet[2519]: E0904 17:20:15.523862 2519 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:20:15.551436 containerd[1457]: time="2024-09-04T17:20:15.551316366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:20:15.554703 containerd[1457]: time="2024-09-04T17:20:15.551796492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:15.554703 containerd[1457]: time="2024-09-04T17:20:15.551936614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:20:15.554703 containerd[1457]: time="2024-09-04T17:20:15.552037175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:15.554703 containerd[1457]: time="2024-09-04T17:20:15.552148940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:20:15.554703 containerd[1457]: time="2024-09-04T17:20:15.552220138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:15.554703 containerd[1457]: time="2024-09-04T17:20:15.552252696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:20:15.554703 containerd[1457]: time="2024-09-04T17:20:15.552275334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:15.583056 systemd[1]: Started cri-containerd-33c8f595423b7baba5708ec67bcb6d2041c10a2c5d2cde19cd50a471fcc486ef.scope - libcontainer container 33c8f595423b7baba5708ec67bcb6d2041c10a2c5d2cde19cd50a471fcc486ef.
Sep 4 17:20:15.585251 systemd[1]: Started cri-containerd-6ec5d59702655bef532e75cb4962a63038fad0c77f791e5c7e59fbc47ffb531c.scope - libcontainer container 6ec5d59702655bef532e75cb4962a63038fad0c77f791e5c7e59fbc47ffb531c.
Sep 4 17:20:15.617219 containerd[1457]: time="2024-09-04T17:20:15.617106957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bk2nk,Uid:49fd1479-087b-4784-86b5-1eed6de0412d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ec5d59702655bef532e75cb4962a63038fad0c77f791e5c7e59fbc47ffb531c\""
Sep 4 17:20:15.618263 kubelet[2519]: E0904 17:20:15.617936 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:15.620908 containerd[1457]: time="2024-09-04T17:20:15.620642179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Sep 4 17:20:15.640466 containerd[1457]: time="2024-09-04T17:20:15.639875606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dcfcbd67d-vsjcf,Uid:b99133fe-bc0e-487c-bcb4-f43b9da2a8ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"33c8f595423b7baba5708ec67bcb6d2041c10a2c5d2cde19cd50a471fcc486ef\""
Sep 4 17:20:15.640982 kubelet[2519]: E0904 17:20:15.640909 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:16.896472 kubelet[2519]: E0904 17:20:16.896419 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s949d" podUID="d2cfa41a-8321-4973-acd5-1a4593214e59"
Sep 4 17:20:17.836331 containerd[1457]: time="2024-09-04T17:20:17.836278508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:17.837035 containerd[1457]: time="2024-09-04T17:20:17.836979097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007"
Sep 4 17:20:17.838143 containerd[1457]: time="2024-09-04T17:20:17.838114524Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:17.840183 containerd[1457]: time="2024-09-04T17:20:17.840154892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:17.840668 containerd[1457]: time="2024-09-04T17:20:17.840641829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 2.219964797s"
Sep 4 17:20:17.840708 containerd[1457]: time="2024-09-04T17:20:17.840670989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\""
Sep 4 17:20:17.841261 containerd[1457]: time="2024-09-04T17:20:17.841173918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Sep 4 17:20:17.842313 containerd[1457]: time="2024-09-04T17:20:17.842289475Z" level=info msg="CreateContainer within sandbox \"6ec5d59702655bef532e75cb4962a63038fad0c77f791e5c7e59fbc47ffb531c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 4 17:20:17.857016 containerd[1457]: time="2024-09-04T17:20:17.856967800Z" level=info msg="CreateContainer within sandbox \"6ec5d59702655bef532e75cb4962a63038fad0c77f791e5c7e59fbc47ffb531c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9092746946c10052679595eb9371c56e315ab0ecee8ea071c29a6b7d1950169f\""
Sep 4 17:20:17.857492 containerd[1457]: time="2024-09-04T17:20:17.857449956Z" level=info msg="StartContainer for \"9092746946c10052679595eb9371c56e315ab0ecee8ea071c29a6b7d1950169f\""
Sep 4 17:20:17.892996 systemd[1]: Started cri-containerd-9092746946c10052679595eb9371c56e315ab0ecee8ea071c29a6b7d1950169f.scope - libcontainer container 9092746946c10052679595eb9371c56e315ab0ecee8ea071c29a6b7d1950169f.
Sep 4 17:20:17.935909 containerd[1457]: time="2024-09-04T17:20:17.935854044Z" level=info msg="StartContainer for \"9092746946c10052679595eb9371c56e315ab0ecee8ea071c29a6b7d1950169f\" returns successfully"
Sep 4 17:20:17.949519 kubelet[2519]: E0904 17:20:17.949401 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:17.955793 systemd[1]: cri-containerd-9092746946c10052679595eb9371c56e315ab0ecee8ea071c29a6b7d1950169f.scope: Deactivated successfully.
Sep 4 17:20:17.984946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9092746946c10052679595eb9371c56e315ab0ecee8ea071c29a6b7d1950169f-rootfs.mount: Deactivated successfully.
Sep 4 17:20:18.036720 containerd[1457]: time="2024-09-04T17:20:18.036642020Z" level=info msg="shim disconnected" id=9092746946c10052679595eb9371c56e315ab0ecee8ea071c29a6b7d1950169f namespace=k8s.io
Sep 4 17:20:18.036720 containerd[1457]: time="2024-09-04T17:20:18.036700230Z" level=warning msg="cleaning up after shim disconnected" id=9092746946c10052679595eb9371c56e315ab0ecee8ea071c29a6b7d1950169f namespace=k8s.io
Sep 4 17:20:18.036720 containerd[1457]: time="2024-09-04T17:20:18.036710902Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:20:18.897135 kubelet[2519]: E0904 17:20:18.897080 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s949d" podUID="d2cfa41a-8321-4973-acd5-1a4593214e59"
Sep 4 17:20:18.953846 kubelet[2519]: E0904 17:20:18.953548 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:20.716377 containerd[1457]: time="2024-09-04T17:20:20.716297409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:20.717477 containerd[1457]: time="2024-09-04T17:20:20.717175064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Sep 4 17:20:20.718737 containerd[1457]: time="2024-09-04T17:20:20.718669052Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:20.722069 containerd[1457]: time="2024-09-04T17:20:20.722009847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:20.722903 containerd[1457]: time="2024-09-04T17:20:20.722845225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.881584056s"
Sep 4 17:20:20.722903 containerd[1457]: time="2024-09-04T17:20:20.722896139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Sep 4 17:20:20.723854 containerd[1457]: time="2024-09-04T17:20:20.723572575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Sep 4 17:20:20.736904 containerd[1457]: time="2024-09-04T17:20:20.736271140Z" level=info msg="CreateContainer within sandbox \"33c8f595423b7baba5708ec67bcb6d2041c10a2c5d2cde19cd50a471fcc486ef\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 17:20:20.754836 containerd[1457]: time="2024-09-04T17:20:20.754772447Z" level=info msg="CreateContainer within sandbox \"33c8f595423b7baba5708ec67bcb6d2041c10a2c5d2cde19cd50a471fcc486ef\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0090c6ecbc689b2aa14c050e4d7d14791ce92727cc8257453293422cb4c83366\""
Sep 4 17:20:20.756094 containerd[1457]: time="2024-09-04T17:20:20.755379943Z" level=info msg="StartContainer for \"0090c6ecbc689b2aa14c050e4d7d14791ce92727cc8257453293422cb4c83366\""
Sep 4 17:20:20.799160 systemd[1]: Started cri-containerd-0090c6ecbc689b2aa14c050e4d7d14791ce92727cc8257453293422cb4c83366.scope - libcontainer container 0090c6ecbc689b2aa14c050e4d7d14791ce92727cc8257453293422cb4c83366.
Sep 4 17:20:20.847175 containerd[1457]: time="2024-09-04T17:20:20.847112180Z" level=info msg="StartContainer for \"0090c6ecbc689b2aa14c050e4d7d14791ce92727cc8257453293422cb4c83366\" returns successfully"
Sep 4 17:20:20.899174 kubelet[2519]: E0904 17:20:20.899117 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s949d" podUID="d2cfa41a-8321-4973-acd5-1a4593214e59"
Sep 4 17:20:20.959045 kubelet[2519]: E0904 17:20:20.958991 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:20.972723 kubelet[2519]: I0904 17:20:20.972409 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-dcfcbd67d-vsjcf" podStartSLOduration=0.890845291 podCreationTimestamp="2024-09-04 17:20:15 +0000 UTC" firstStartedPulling="2024-09-04 17:20:15.641869931 +0000 UTC m=+17.846243813" lastFinishedPulling="2024-09-04 17:20:20.723290761 +0000 UTC m=+22.927664653" observedRunningTime="2024-09-04 17:20:20.971437816 +0000 UTC m=+23.175811698" watchObservedRunningTime="2024-09-04 17:20:20.972266131 +0000 UTC m=+23.176640013"
Sep 4 17:20:21.962529 kubelet[2519]: I0904 17:20:21.962473 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 4 17:20:21.963331 kubelet[2519]: E0904 17:20:21.963300 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:22.896628 kubelet[2519]: E0904 17:20:22.896581 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s949d" podUID="d2cfa41a-8321-4973-acd5-1a4593214e59"
Sep 4 17:20:24.897067 kubelet[2519]: E0904 17:20:24.897011 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s949d" podUID="d2cfa41a-8321-4973-acd5-1a4593214e59"
Sep 4 17:20:25.117358 containerd[1457]: time="2024-09-04T17:20:25.117298763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:25.118609 containerd[1457]: time="2024-09-04T17:20:25.118433131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736"
Sep 4 17:20:25.120097 containerd[1457]: time="2024-09-04T17:20:25.120059863Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:25.123518 containerd[1457]: time="2024-09-04T17:20:25.123446124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:25.124254 containerd[1457]: time="2024-09-04T17:20:25.124221688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.400609955s"
Sep 4 17:20:25.124290 containerd[1457]: time="2024-09-04T17:20:25.124253683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\""
Sep 4 17:20:25.126152 containerd[1457]: time="2024-09-04T17:20:25.126123786Z" level=info msg="CreateContainer within sandbox \"6ec5d59702655bef532e75cb4962a63038fad0c77f791e5c7e59fbc47ffb531c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 17:20:25.143226 containerd[1457]: time="2024-09-04T17:20:25.143167118Z" level=info msg="CreateContainer within sandbox \"6ec5d59702655bef532e75cb4962a63038fad0c77f791e5c7e59fbc47ffb531c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b46c52eab350f0c883a50dc4f5bb3486f6f700a5b64bbed7b0c42c6f0b5fef30\""
Sep 4 17:20:25.143747 containerd[1457]: time="2024-09-04T17:20:25.143722718Z" level=info msg="StartContainer for \"b46c52eab350f0c883a50dc4f5bb3486f6f700a5b64bbed7b0c42c6f0b5fef30\""
Sep 4 17:20:25.179990 systemd[1]: Started cri-containerd-b46c52eab350f0c883a50dc4f5bb3486f6f700a5b64bbed7b0c42c6f0b5fef30.scope - libcontainer container b46c52eab350f0c883a50dc4f5bb3486f6f700a5b64bbed7b0c42c6f0b5fef30.
Sep 4 17:20:25.363334 containerd[1457]: time="2024-09-04T17:20:25.363275591Z" level=info msg="StartContainer for \"b46c52eab350f0c883a50dc4f5bb3486f6f700a5b64bbed7b0c42c6f0b5fef30\" returns successfully"
Sep 4 17:20:25.970366 kubelet[2519]: E0904 17:20:25.970314 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:26.708326 containerd[1457]: time="2024-09-04T17:20:26.708276692Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:20:26.711909 systemd[1]: cri-containerd-b46c52eab350f0c883a50dc4f5bb3486f6f700a5b64bbed7b0c42c6f0b5fef30.scope: Deactivated successfully.
Sep 4 17:20:26.736281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b46c52eab350f0c883a50dc4f5bb3486f6f700a5b64bbed7b0c42c6f0b5fef30-rootfs.mount: Deactivated successfully.
Sep 4 17:20:26.761848 kubelet[2519]: I0904 17:20:26.761376 2519 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Sep 4 17:20:26.792393 kubelet[2519]: I0904 17:20:26.779968 2519 topology_manager.go:215] "Topology Admit Handler" podUID="3ddb733d-9464-44f3-b6e4-26ba87cd5114" podNamespace="kube-system" podName="coredns-5dd5756b68-wjf2b"
Sep 4 17:20:26.792393 kubelet[2519]: I0904 17:20:26.787764 2519 topology_manager.go:215] "Topology Admit Handler" podUID="827502be-1bad-4761-be20-f8b4bc19f05e" podNamespace="calico-system" podName="calico-kube-controllers-74f559954f-b2z2l"
Sep 4 17:20:26.792393 kubelet[2519]: I0904 17:20:26.790261 2519 topology_manager.go:215] "Topology Admit Handler" podUID="63225d84-91a9-409e-b445-bac344cc3e0c" podNamespace="kube-system" podName="coredns-5dd5756b68-bclg9"
Sep 4 17:20:26.791392 systemd[1]: Created slice kubepods-burstable-pod3ddb733d_9464_44f3_b6e4_26ba87cd5114.slice - libcontainer container kubepods-burstable-pod3ddb733d_9464_44f3_b6e4_26ba87cd5114.slice.
Sep 4 17:20:26.801535 systemd[1]: Created slice kubepods-besteffort-pod827502be_1bad_4761_be20_f8b4bc19f05e.slice - libcontainer container kubepods-besteffort-pod827502be_1bad_4761_be20_f8b4bc19f05e.slice.
Sep 4 17:20:26.806458 systemd[1]: Created slice kubepods-burstable-pod63225d84_91a9_409e_b445_bac344cc3e0c.slice - libcontainer container kubepods-burstable-pod63225d84_91a9_409e_b445_bac344cc3e0c.slice.
Sep 4 17:20:26.870596 kubelet[2519]: I0904 17:20:26.870540 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ddb733d-9464-44f3-b6e4-26ba87cd5114-config-volume\") pod \"coredns-5dd5756b68-wjf2b\" (UID: \"3ddb733d-9464-44f3-b6e4-26ba87cd5114\") " pod="kube-system/coredns-5dd5756b68-wjf2b"
Sep 4 17:20:26.870596 kubelet[2519]: I0904 17:20:26.870591 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54s9m\" (UniqueName: \"kubernetes.io/projected/827502be-1bad-4761-be20-f8b4bc19f05e-kube-api-access-54s9m\") pod \"calico-kube-controllers-74f559954f-b2z2l\" (UID: \"827502be-1bad-4761-be20-f8b4bc19f05e\") " pod="calico-system/calico-kube-controllers-74f559954f-b2z2l"
Sep 4 17:20:26.870596 kubelet[2519]: I0904 17:20:26.870613 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63225d84-91a9-409e-b445-bac344cc3e0c-config-volume\") pod \"coredns-5dd5756b68-bclg9\" (UID: \"63225d84-91a9-409e-b445-bac344cc3e0c\") " pod="kube-system/coredns-5dd5756b68-bclg9"
Sep 4 17:20:26.870851 kubelet[2519]: I0904 17:20:26.870702 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/827502be-1bad-4761-be20-f8b4bc19f05e-tigera-ca-bundle\") pod \"calico-kube-controllers-74f559954f-b2z2l\" (UID: \"827502be-1bad-4761-be20-f8b4bc19f05e\") " pod="calico-system/calico-kube-controllers-74f559954f-b2z2l"
Sep 4 17:20:26.870851 kubelet[2519]: I0904 17:20:26.870752 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprbs\" (UniqueName: \"kubernetes.io/projected/63225d84-91a9-409e-b445-bac344cc3e0c-kube-api-access-fprbs\") pod \"coredns-5dd5756b68-bclg9\" (UID: \"63225d84-91a9-409e-b445-bac344cc3e0c\") " pod="kube-system/coredns-5dd5756b68-bclg9"
Sep 4 17:20:26.870851 kubelet[2519]: I0904 17:20:26.870800 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvdm5\" (UniqueName: \"kubernetes.io/projected/3ddb733d-9464-44f3-b6e4-26ba87cd5114-kube-api-access-wvdm5\") pod \"coredns-5dd5756b68-wjf2b\" (UID: \"3ddb733d-9464-44f3-b6e4-26ba87cd5114\") " pod="kube-system/coredns-5dd5756b68-wjf2b"
Sep 4 17:20:26.902267 systemd[1]: Created slice kubepods-besteffort-podd2cfa41a_8321_4973_acd5_1a4593214e59.slice - libcontainer container kubepods-besteffort-podd2cfa41a_8321_4973_acd5_1a4593214e59.slice.
Sep 4 17:20:26.974207 kubelet[2519]: E0904 17:20:26.974072 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:27.305659 containerd[1457]: time="2024-09-04T17:20:27.305173916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s949d,Uid:d2cfa41a-8321-4973-acd5-1a4593214e59,Namespace:calico-system,Attempt:0,}"
Sep 4 17:20:27.404449 containerd[1457]: time="2024-09-04T17:20:27.404399312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f559954f-b2z2l,Uid:827502be-1bad-4761-be20-f8b4bc19f05e,Namespace:calico-system,Attempt:0,}"
Sep 4 17:20:27.409041 kubelet[2519]: E0904 17:20:27.409020 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:27.409580 containerd[1457]: time="2024-09-04T17:20:27.409520403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bclg9,Uid:63225d84-91a9-409e-b445-bac344cc3e0c,Namespace:kube-system,Attempt:0,}"
Sep 4 17:20:27.658090 containerd[1457]: time="2024-09-04T17:20:27.654743131Z" level=info msg="shim disconnected" id=b46c52eab350f0c883a50dc4f5bb3486f6f700a5b64bbed7b0c42c6f0b5fef30 namespace=k8s.io
Sep 4 17:20:27.658090 containerd[1457]: time="2024-09-04T17:20:27.657906989Z" level=warning msg="cleaning up after shim disconnected" id=b46c52eab350f0c883a50dc4f5bb3486f6f700a5b64bbed7b0c42c6f0b5fef30 namespace=k8s.io
Sep 4 17:20:27.658090 containerd[1457]: time="2024-09-04T17:20:27.657917150Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:20:27.696244 kubelet[2519]: E0904 17:20:27.696191 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:27.696866 containerd[1457]: time="2024-09-04T17:20:27.696824709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wjf2b,Uid:3ddb733d-9464-44f3-b6e4-26ba87cd5114,Namespace:kube-system,Attempt:0,}"
Sep 4 17:20:27.770317 containerd[1457]: time="2024-09-04T17:20:27.770251009Z" level=error msg="Failed to destroy network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:20:27.774466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850-shm.mount: Deactivated successfully.
Sep 4 17:20:27.775206 containerd[1457]: time="2024-09-04T17:20:27.775172279Z" level=error msg="encountered an error cleaning up failed sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:20:27.775290 containerd[1457]: time="2024-09-04T17:20:27.775260165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s949d,Uid:d2cfa41a-8321-4973-acd5-1a4593214e59,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:20:27.776951 kubelet[2519]: E0904 17:20:27.776916 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:20:27.777015 kubelet[2519]: E0904 17:20:27.776989 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s949d"
Sep 4 17:20:27.777015 kubelet[2519]: E0904 17:20:27.777010 2519 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s949d"
Sep 4 17:20:27.777113 kubelet[2519]: E0904 17:20:27.777093 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s949d_calico-system(d2cfa41a-8321-4973-acd5-1a4593214e59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s949d_calico-system(d2cfa41a-8321-4973-acd5-1a4593214e59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s949d" podUID="d2cfa41a-8321-4973-acd5-1a4593214e59"
Sep 4 17:20:27.782579 containerd[1457]: time="2024-09-04T17:20:27.782530376Z" level=error msg="Failed to destroy network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:20:27.785748 containerd[1457]: time="2024-09-04T17:20:27.785690446Z" level=error msg="encountered an error cleaning up failed sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:20:27.785835 containerd[1457]: time="2024-09-04T17:20:27.785782582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wjf2b,Uid:3ddb733d-9464-44f3-b6e4-26ba87cd5114,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:20:27.786431 kubelet[2519]: E0904 17:20:27.786061 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:20:27.786431 kubelet[2519]: E0904 17:20:27.786121 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-wjf2b"
Sep 4 17:20:27.786431 kubelet[2519]: E0904 17:20:27.786142 2519 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="kube-system/coredns-5dd5756b68-wjf2b" Sep 4 17:20:27.786539 kubelet[2519]: E0904 17:20:27.786191 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-wjf2b_kube-system(3ddb733d-9464-44f3-b6e4-26ba87cd5114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-wjf2b_kube-system(3ddb733d-9464-44f3-b6e4-26ba87cd5114)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-wjf2b" podUID="3ddb733d-9464-44f3-b6e4-26ba87cd5114" Sep 4 17:20:27.786782 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12-shm.mount: Deactivated successfully. 
Sep 4 17:20:27.789801 containerd[1457]: time="2024-09-04T17:20:27.789760907Z" level=error msg="Failed to destroy network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:27.790231 containerd[1457]: time="2024-09-04T17:20:27.790186973Z" level=error msg="encountered an error cleaning up failed sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:27.790279 containerd[1457]: time="2024-09-04T17:20:27.790249329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f559954f-b2z2l,Uid:827502be-1bad-4761-be20-f8b4bc19f05e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:27.790451 kubelet[2519]: E0904 17:20:27.790431 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:27.790491 kubelet[2519]: E0904 17:20:27.790472 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74f559954f-b2z2l" Sep 4 17:20:27.790519 kubelet[2519]: E0904 17:20:27.790493 2519 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74f559954f-b2z2l" Sep 4 17:20:27.790549 kubelet[2519]: E0904 17:20:27.790533 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74f559954f-b2z2l_calico-system(827502be-1bad-4761-be20-f8b4bc19f05e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74f559954f-b2z2l_calico-system(827502be-1bad-4761-be20-f8b4bc19f05e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74f559954f-b2z2l" podUID="827502be-1bad-4761-be20-f8b4bc19f05e" Sep 4 17:20:27.792283 containerd[1457]: time="2024-09-04T17:20:27.792237914Z" level=error msg="Failed to destroy network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:27.792670 containerd[1457]: time="2024-09-04T17:20:27.792638520Z" level=error msg="encountered an error cleaning up failed sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:27.792723 containerd[1457]: time="2024-09-04T17:20:27.792695213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bclg9,Uid:63225d84-91a9-409e-b445-bac344cc3e0c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:27.792998 kubelet[2519]: E0904 17:20:27.792972 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:27.793061 kubelet[2519]: E0904 17:20:27.793042 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-bclg9" Sep 4 17:20:27.793086 kubelet[2519]: E0904 17:20:27.793064 2519 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-bclg9" Sep 4 17:20:27.793143 kubelet[2519]: E0904 17:20:27.793129 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-bclg9_kube-system(63225d84-91a9-409e-b445-bac344cc3e0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-bclg9_kube-system(63225d84-91a9-409e-b445-bac344cc3e0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-bclg9" podUID="63225d84-91a9-409e-b445-bac344cc3e0c" Sep 4 17:20:27.801550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9-shm.mount: Deactivated successfully. Sep 4 17:20:27.801695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec-shm.mount: Deactivated successfully. Sep 4 17:20:27.818049 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:47952.service - OpenSSH per-connection server daemon (10.0.0.1:47952). 
Sep 4 17:20:27.847649 sshd[3377]: Accepted publickey for core from 10.0.0.1 port 47952 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:20:27.849422 sshd[3377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:27.853506 systemd-logind[1440]: New session 8 of user core. Sep 4 17:20:27.859942 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:20:27.975912 kubelet[2519]: I0904 17:20:27.975776 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:27.976454 containerd[1457]: time="2024-09-04T17:20:27.976418520Z" level=info msg="StopPodSandbox for \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\"" Sep 4 17:20:27.976636 containerd[1457]: time="2024-09-04T17:20:27.976619324Z" level=info msg="Ensure that sandbox 4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec in task-service has been cleanup successfully" Sep 4 17:20:27.978921 kubelet[2519]: I0904 17:20:27.978780 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:27.981855 containerd[1457]: time="2024-09-04T17:20:27.980962672Z" level=info msg="StopPodSandbox for \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\"" Sep 4 17:20:27.981855 containerd[1457]: time="2024-09-04T17:20:27.981206021Z" level=info msg="Ensure that sandbox ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850 in task-service has been cleanup successfully" Sep 4 17:20:27.985007 kubelet[2519]: E0904 17:20:27.984979 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:27.986347 containerd[1457]: time="2024-09-04T17:20:27.986251861Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:20:27.986420 kubelet[2519]: I0904 17:20:27.986305 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:27.987070 containerd[1457]: time="2024-09-04T17:20:27.987044374Z" level=info msg="StopPodSandbox for \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\"" Sep 4 17:20:27.987259 containerd[1457]: time="2024-09-04T17:20:27.987214807Z" level=info msg="Ensure that sandbox 9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12 in task-service has been cleanup successfully" Sep 4 17:20:27.988478 kubelet[2519]: I0904 17:20:27.988431 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:27.989776 containerd[1457]: time="2024-09-04T17:20:27.989567124Z" level=info msg="StopPodSandbox for \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\"" Sep 4 17:20:27.990141 containerd[1457]: time="2024-09-04T17:20:27.990112429Z" level=info msg="Ensure that sandbox 15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9 in task-service has been cleanup successfully" Sep 4 17:20:28.023065 sshd[3377]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:28.027495 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:47952.service: Deactivated successfully. Sep 4 17:20:28.030371 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:20:28.031276 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:20:28.032965 systemd-logind[1440]: Removed session 8. 
Sep 4 17:20:28.033763 containerd[1457]: time="2024-09-04T17:20:28.033716818Z" level=error msg="StopPodSandbox for \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\" failed" error="failed to destroy network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:28.034342 kubelet[2519]: E0904 17:20:28.034069 2519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:28.034342 kubelet[2519]: E0904 17:20:28.034146 2519 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec"} Sep 4 17:20:28.034342 kubelet[2519]: E0904 17:20:28.034205 2519 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"827502be-1bad-4761-be20-f8b4bc19f05e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:20:28.034342 kubelet[2519]: E0904 17:20:28.034254 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"827502be-1bad-4761-be20-f8b4bc19f05e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74f559954f-b2z2l" podUID="827502be-1bad-4761-be20-f8b4bc19f05e" Sep 4 17:20:28.063727 containerd[1457]: time="2024-09-04T17:20:28.063537386Z" level=error msg="StopPodSandbox for \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\" failed" error="failed to destroy network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:28.063948 kubelet[2519]: E0904 17:20:28.063919 2519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:28.064014 kubelet[2519]: E0904 17:20:28.063971 2519 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12"} Sep 4 17:20:28.064057 kubelet[2519]: E0904 17:20:28.064018 2519 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ddb733d-9464-44f3-b6e4-26ba87cd5114\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:20:28.064057 kubelet[2519]: E0904 17:20:28.064055 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ddb733d-9464-44f3-b6e4-26ba87cd5114\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-wjf2b" podUID="3ddb733d-9464-44f3-b6e4-26ba87cd5114" Sep 4 17:20:28.065842 containerd[1457]: time="2024-09-04T17:20:28.065483781Z" level=error msg="StopPodSandbox for \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\" failed" error="failed to destroy network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:28.066211 kubelet[2519]: E0904 17:20:28.066188 2519 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 
17:20:28.066290 kubelet[2519]: E0904 17:20:28.066235 2519 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850"} Sep 4 17:20:28.066331 kubelet[2519]: E0904 17:20:28.066292 2519 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2cfa41a-8321-4973-acd5-1a4593214e59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:20:28.066419 kubelet[2519]: E0904 17:20:28.066333 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2cfa41a-8321-4973-acd5-1a4593214e59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s949d" podUID="d2cfa41a-8321-4973-acd5-1a4593214e59" Sep 4 17:20:28.072051 containerd[1457]: time="2024-09-04T17:20:28.071991415Z" level=error msg="StopPodSandbox for \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\" failed" error="failed to destroy network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:20:28.072317 kubelet[2519]: E0904 17:20:28.072287 2519 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:28.072414 kubelet[2519]: E0904 17:20:28.072336 2519 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9"} Sep 4 17:20:28.072414 kubelet[2519]: E0904 17:20:28.072391 2519 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63225d84-91a9-409e-b445-bac344cc3e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:20:28.072525 kubelet[2519]: E0904 17:20:28.072422 2519 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63225d84-91a9-409e-b445-bac344cc3e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-bclg9" podUID="63225d84-91a9-409e-b445-bac344cc3e0c" Sep 4 17:20:30.346738 kubelet[2519]: I0904 17:20:30.346685 2519 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Sep 4 17:20:30.351275 kubelet[2519]: E0904 17:20:30.350422 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:30.996483 kubelet[2519]: E0904 17:20:30.996430 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:31.980998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155838773.mount: Deactivated successfully. Sep 4 17:20:32.688953 containerd[1457]: time="2024-09-04T17:20:32.688875569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:32.690100 containerd[1457]: time="2024-09-04T17:20:32.690013927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Sep 4 17:20:32.691606 containerd[1457]: time="2024-09-04T17:20:32.691565498Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:32.693734 containerd[1457]: time="2024-09-04T17:20:32.693679460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:32.694340 containerd[1457]: time="2024-09-04T17:20:32.694289636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 4.707994537s" 
Sep 4 17:20:32.694375 containerd[1457]: time="2024-09-04T17:20:32.694340307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Sep 4 17:20:32.706554 containerd[1457]: time="2024-09-04T17:20:32.706511676Z" level=info msg="CreateContainer within sandbox \"6ec5d59702655bef532e75cb4962a63038fad0c77f791e5c7e59fbc47ffb531c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:20:32.735833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1269398119.mount: Deactivated successfully. Sep 4 17:20:32.748451 containerd[1457]: time="2024-09-04T17:20:32.748384613Z" level=info msg="CreateContainer within sandbox \"6ec5d59702655bef532e75cb4962a63038fad0c77f791e5c7e59fbc47ffb531c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9d6efb73e3f7a2ab7a283920c22cffcdc1685d0baabebe0228902e59235b8216\"" Sep 4 17:20:32.749191 containerd[1457]: time="2024-09-04T17:20:32.749156682Z" level=info msg="StartContainer for \"9d6efb73e3f7a2ab7a283920c22cffcdc1685d0baabebe0228902e59235b8216\"" Sep 4 17:20:32.836974 systemd[1]: Started cri-containerd-9d6efb73e3f7a2ab7a283920c22cffcdc1685d0baabebe0228902e59235b8216.scope - libcontainer container 9d6efb73e3f7a2ab7a283920c22cffcdc1685d0baabebe0228902e59235b8216. Sep 4 17:20:33.033537 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:47954.service - OpenSSH per-connection server daemon (10.0.0.1:47954). 
Sep 4 17:20:33.150218 kubelet[2519]: E0904 17:20:33.127252 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:33.150218 kubelet[2519]: I0904 17:20:33.143653 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-bk2nk" podStartSLOduration=1.069368278 podCreationTimestamp="2024-09-04 17:20:15 +0000 UTC" firstStartedPulling="2024-09-04 17:20:15.620359065 +0000 UTC m=+17.824732947" lastFinishedPulling="2024-09-04 17:20:32.694594783 +0000 UTC m=+34.898968665" observedRunningTime="2024-09-04 17:20:33.143307116 +0000 UTC m=+35.347681028" watchObservedRunningTime="2024-09-04 17:20:33.143603996 +0000 UTC m=+35.347977898" Sep 4 17:20:33.152206 containerd[1457]: time="2024-09-04T17:20:33.123407289Z" level=info msg="StartContainer for \"9d6efb73e3f7a2ab7a283920c22cffcdc1685d0baabebe0228902e59235b8216\" returns successfully" Sep 4 17:20:33.172058 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:20:33.172263 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 17:20:33.182464 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 47954 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:20:33.184690 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:33.190527 systemd-logind[1440]: New session 9 of user core. Sep 4 17:20:33.197994 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:20:33.372660 sshd[3540]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:33.378304 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:47954.service: Deactivated successfully. Sep 4 17:20:33.381560 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:20:33.383003 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. 
Sep 4 17:20:33.385182 systemd-logind[1440]: Removed session 9.
Sep 4 17:20:34.128957 kubelet[2519]: E0904 17:20:34.128928 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:34.736843 kernel: bpftool[3754]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 4 17:20:34.979080 systemd-networkd[1396]: vxlan.calico: Link UP
Sep 4 17:20:34.979090 systemd-networkd[1396]: vxlan.calico: Gained carrier
Sep 4 17:20:36.238067 systemd-networkd[1396]: vxlan.calico: Gained IPv6LL
Sep 4 17:20:38.386366 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:44664.service - OpenSSH per-connection server daemon (10.0.0.1:44664).
Sep 4 17:20:38.422576 sshd[3828]: Accepted publickey for core from 10.0.0.1 port 44664 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:20:38.424643 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:20:38.429356 systemd-logind[1440]: New session 10 of user core.
Sep 4 17:20:38.439038 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 17:20:38.569605 sshd[3828]: pam_unix(sshd:session): session closed for user core
Sep 4 17:20:38.575348 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:44664.service: Deactivated successfully.
Sep 4 17:20:38.577702 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 17:20:38.578576 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit.
Sep 4 17:20:38.580022 systemd-logind[1440]: Removed session 10.
Sep 4 17:20:38.896970 containerd[1457]: time="2024-09-04T17:20:38.896873051Z" level=info msg="StopPodSandbox for \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\""
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:38.952 [INFO][3858] k8s.go 608: Cleaning up netns ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec"
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:38.952 [INFO][3858] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" iface="eth0" netns="/var/run/netns/cni-abedadd3-f5e0-d455-7955-7842f8094017"
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:38.953 [INFO][3858] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" iface="eth0" netns="/var/run/netns/cni-abedadd3-f5e0-d455-7955-7842f8094017"
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:38.953 [INFO][3858] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" iface="eth0" netns="/var/run/netns/cni-abedadd3-f5e0-d455-7955-7842f8094017"
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:38.953 [INFO][3858] k8s.go 615: Releasing IP address(es) ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec"
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:38.953 [INFO][3858] utils.go 188: Calico CNI releasing IP address ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec"
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:39.007 [INFO][3866] ipam_plugin.go 417: Releasing address using handleID ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" HandleID="k8s-pod-network.4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:39.008 [INFO][3866] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:39.008 [INFO][3866] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:39.015 [WARNING][3866] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" HandleID="k8s-pod-network.4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:39.015 [INFO][3866] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" HandleID="k8s-pod-network.4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:39.016 [INFO][3866] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:20:39.021755 containerd[1457]: 2024-09-04 17:20:39.019 [INFO][3858] k8s.go 621: Teardown processing complete. ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec"
Sep 4 17:20:39.022686 containerd[1457]: time="2024-09-04T17:20:39.022520075Z" level=info msg="TearDown network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\" successfully"
Sep 4 17:20:39.022686 containerd[1457]: time="2024-09-04T17:20:39.022558241Z" level=info msg="StopPodSandbox for \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\" returns successfully"
Sep 4 17:20:39.024384 systemd[1]: run-netns-cni\x2dabedadd3\x2df5e0\x2dd455\x2d7955\x2d7842f8094017.mount: Deactivated successfully.
Sep 4 17:20:39.028067 containerd[1457]: time="2024-09-04T17:20:39.028032227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f559954f-b2z2l,Uid:827502be-1bad-4761-be20-f8b4bc19f05e,Namespace:calico-system,Attempt:1,}"
Sep 4 17:20:39.678444 systemd-networkd[1396]: calibc973f42a64: Link UP
Sep 4 17:20:39.679125 systemd-networkd[1396]: calibc973f42a64: Gained carrier
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.474 [INFO][3873] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0 calico-kube-controllers-74f559954f- calico-system 827502be-1bad-4761-be20-f8b4bc19f05e 799 0 2024-09-04 17:20:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74f559954f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-74f559954f-b2z2l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibc973f42a64 [] []}} ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Namespace="calico-system" Pod="calico-kube-controllers-74f559954f-b2z2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.474 [INFO][3873] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Namespace="calico-system" Pod="calico-kube-controllers-74f559954f-b2z2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.498 [INFO][3886] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" HandleID="k8s-pod-network.9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.550 [INFO][3886] ipam_plugin.go 270: Auto assigning IP ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" HandleID="k8s-pod-network.9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026ee30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74f559954f-b2z2l", "timestamp":"2024-09-04 17:20:39.498188491 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.550 [INFO][3886] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.550 [INFO][3886] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.550 [INFO][3886] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.551 [INFO][3886] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" host="localhost"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.560 [INFO][3886] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.568 [INFO][3886] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.570 [INFO][3886] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.573 [INFO][3886] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.573 [INFO][3886] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" host="localhost"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.575 [INFO][3886] ipam.go 1685: Creating new handle: k8s-pod-network.9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.580 [INFO][3886] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" host="localhost"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.672 [INFO][3886] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" host="localhost"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.672 [INFO][3886] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" host="localhost"
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.672 [INFO][3886] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:20:39.691539 containerd[1457]: 2024-09-04 17:20:39.672 [INFO][3886] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" HandleID="k8s-pod-network.9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.692177 containerd[1457]: 2024-09-04 17:20:39.675 [INFO][3873] k8s.go 386: Populated endpoint ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Namespace="calico-system" Pod="calico-kube-controllers-74f559954f-b2z2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0", GenerateName:"calico-kube-controllers-74f559954f-", Namespace:"calico-system", SelfLink:"", UID:"827502be-1bad-4761-be20-f8b4bc19f05e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f559954f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74f559954f-b2z2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc973f42a64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:20:39.692177 containerd[1457]: 2024-09-04 17:20:39.675 [INFO][3873] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Namespace="calico-system" Pod="calico-kube-controllers-74f559954f-b2z2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.692177 containerd[1457]: 2024-09-04 17:20:39.675 [INFO][3873] dataplane_linux.go 68: Setting the host side veth name to calibc973f42a64 ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Namespace="calico-system" Pod="calico-kube-controllers-74f559954f-b2z2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.692177 containerd[1457]: 2024-09-04 17:20:39.678 [INFO][3873] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Namespace="calico-system" Pod="calico-kube-controllers-74f559954f-b2z2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.692177 containerd[1457]: 2024-09-04 17:20:39.678 [INFO][3873] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Namespace="calico-system" Pod="calico-kube-controllers-74f559954f-b2z2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0", GenerateName:"calico-kube-controllers-74f559954f-", Namespace:"calico-system", SelfLink:"", UID:"827502be-1bad-4761-be20-f8b4bc19f05e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f559954f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab", Pod:"calico-kube-controllers-74f559954f-b2z2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc973f42a64", MAC:"42:87:ca:b8:9f:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:20:39.692177 containerd[1457]: 2024-09-04 17:20:39.685 [INFO][3873] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab" Namespace="calico-system" Pod="calico-kube-controllers-74f559954f-b2z2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0"
Sep 4 17:20:39.744263 containerd[1457]: time="2024-09-04T17:20:39.744154715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:20:39.744870 containerd[1457]: time="2024-09-04T17:20:39.744783154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:39.744870 containerd[1457]: time="2024-09-04T17:20:39.744839745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:20:39.745012 containerd[1457]: time="2024-09-04T17:20:39.744854825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:20:39.784969 systemd[1]: Started cri-containerd-9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab.scope - libcontainer container 9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab.
Sep 4 17:20:39.796701 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 17:20:39.820176 containerd[1457]: time="2024-09-04T17:20:39.820123264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f559954f-b2z2l,Uid:827502be-1bad-4761-be20-f8b4bc19f05e,Namespace:calico-system,Attempt:1,} returns sandbox id \"9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab\""
Sep 4 17:20:39.821504 containerd[1457]: time="2024-09-04T17:20:39.821414221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\""
Sep 4 17:20:41.230053 systemd-networkd[1396]: calibc973f42a64: Gained IPv6LL
Sep 4 17:20:41.897228 containerd[1457]: time="2024-09-04T17:20:41.897157950Z" level=info msg="StopPodSandbox for \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\""
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:41.946 [INFO][3977] k8s.go 608: Cleaning up netns ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9"
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:41.947 [INFO][3977] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" iface="eth0" netns="/var/run/netns/cni-a5877a91-83dd-1814-74f6-9ba8e1b80cfb"
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:41.947 [INFO][3977] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" iface="eth0" netns="/var/run/netns/cni-a5877a91-83dd-1814-74f6-9ba8e1b80cfb"
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:41.947 [INFO][3977] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" iface="eth0" netns="/var/run/netns/cni-a5877a91-83dd-1814-74f6-9ba8e1b80cfb"
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:41.947 [INFO][3977] k8s.go 615: Releasing IP address(es) ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9"
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:41.947 [INFO][3977] utils.go 188: Calico CNI releasing IP address ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9"
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:41.970 [INFO][3985] ipam_plugin.go 417: Releasing address using handleID ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" HandleID="k8s-pod-network.15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0"
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:41.970 [INFO][3985] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:41.971 [INFO][3985] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:42.013 [WARNING][3985] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" HandleID="k8s-pod-network.15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0"
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:42.013 [INFO][3985] ipam_plugin.go 445: Releasing address using workloadID ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" HandleID="k8s-pod-network.15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0"
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:42.015 [INFO][3985] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:20:42.023992 containerd[1457]: 2024-09-04 17:20:42.018 [INFO][3977] k8s.go 621: Teardown processing complete. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9"
Sep 4 17:20:42.023992 containerd[1457]: time="2024-09-04T17:20:42.021541874Z" level=info msg="TearDown network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\" successfully"
Sep 4 17:20:42.023992 containerd[1457]: time="2024-09-04T17:20:42.021574728Z" level=info msg="StopPodSandbox for \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\" returns successfully"
Sep 4 17:20:42.023992 containerd[1457]: time="2024-09-04T17:20:42.022783554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bclg9,Uid:63225d84-91a9-409e-b445-bac344cc3e0c,Namespace:kube-system,Attempt:1,}"
Sep 4 17:20:42.024649 kubelet[2519]: E0904 17:20:42.021927 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:20:42.024904 systemd[1]: run-netns-cni\x2da5877a91\x2d83dd\x2d1814\x2d74f6\x2d9ba8e1b80cfb.mount: Deactivated successfully.
Sep 4 17:20:42.230060 containerd[1457]: time="2024-09-04T17:20:42.229899878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:42.235136 containerd[1457]: time="2024-09-04T17:20:42.235052344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125"
Sep 4 17:20:42.238912 containerd[1457]: time="2024-09-04T17:20:42.238722997Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:42.245147 containerd[1457]: time="2024-09-04T17:20:42.245098175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:20:42.245535 containerd[1457]: time="2024-09-04T17:20:42.245506467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.424063369s"
Sep 4 17:20:42.246001 containerd[1457]: time="2024-09-04T17:20:42.245538420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\""
Sep 4 17:20:42.264315 containerd[1457]: time="2024-09-04T17:20:42.264148510Z" level=info msg="CreateContainer within sandbox \"9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 4 17:20:42.281476 containerd[1457]: time="2024-09-04T17:20:42.281426603Z" level=info msg="CreateContainer within sandbox \"9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"11a0a9e32e8da25eec6940e0a3037bdcce7ad473516362d351b5fe1f5b486730\""
Sep 4 17:20:42.282093 containerd[1457]: time="2024-09-04T17:20:42.282058926Z" level=info msg="StartContainer for \"11a0a9e32e8da25eec6940e0a3037bdcce7ad473516362d351b5fe1f5b486730\""
Sep 4 17:20:42.317106 systemd[1]: Started cri-containerd-11a0a9e32e8da25eec6940e0a3037bdcce7ad473516362d351b5fe1f5b486730.scope - libcontainer container 11a0a9e32e8da25eec6940e0a3037bdcce7ad473516362d351b5fe1f5b486730.
Sep 4 17:20:42.361738 systemd-networkd[1396]: cali24efe6a85f0: Link UP
Sep 4 17:20:42.362008 systemd-networkd[1396]: cali24efe6a85f0: Gained carrier
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.278 [INFO][3995] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--bclg9-eth0 coredns-5dd5756b68- kube-system 63225d84-91a9-409e-b445-bac344cc3e0c 818 0 2024-09-04 17:20:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-bclg9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24efe6a85f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Namespace="kube-system" Pod="coredns-5dd5756b68-bclg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bclg9-"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.278 [INFO][3995] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Namespace="kube-system" Pod="coredns-5dd5756b68-bclg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bclg9-eth0"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.316 [INFO][4013] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" HandleID="k8s-pod-network.1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.325 [INFO][4013] ipam_plugin.go 270: Auto assigning IP ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" HandleID="k8s-pod-network.1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000519a30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-bclg9", "timestamp":"2024-09-04 17:20:42.316362146 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.326 [INFO][4013] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.326 [INFO][4013] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.326 [INFO][4013] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.327 [INFO][4013] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" host="localhost"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.332 [INFO][4013] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.340 [INFO][4013] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.342 [INFO][4013] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.345 [INFO][4013] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.345 [INFO][4013] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" host="localhost"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.346 [INFO][4013] ipam.go 1685: Creating new handle: k8s-pod-network.1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.350 [INFO][4013] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" host="localhost"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.354 [INFO][4013] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" host="localhost"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.354 [INFO][4013] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" host="localhost"
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.354 [INFO][4013] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:20:42.379075 containerd[1457]: 2024-09-04 17:20:42.354 [INFO][4013] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" HandleID="k8s-pod-network.1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0"
Sep 4 17:20:42.379696 containerd[1457]: 2024-09-04 17:20:42.358 [INFO][3995] k8s.go 386: Populated endpoint ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Namespace="kube-system" Pod="coredns-5dd5756b68-bclg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--bclg9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"63225d84-91a9-409e-b445-bac344cc3e0c", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-bclg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24efe6a85f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:20:42.379696 containerd[1457]: 2024-09-04 17:20:42.358 [INFO][3995] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Namespace="kube-system" Pod="coredns-5dd5756b68-bclg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bclg9-eth0"
Sep 4 17:20:42.379696 containerd[1457]: 2024-09-04 17:20:42.358 [INFO][3995] dataplane_linux.go 68: Setting the host side veth name to cali24efe6a85f0 ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Namespace="kube-system" Pod="coredns-5dd5756b68-bclg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bclg9-eth0"
Sep 4 17:20:42.379696 containerd[1457]: 2024-09-04 17:20:42.360 [INFO][3995] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Namespace="kube-system" Pod="coredns-5dd5756b68-bclg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bclg9-eth0"
Sep 4 17:20:42.379696 containerd[1457]: 2024-09-04 17:20:42.362 [INFO][3995] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Namespace="kube-system" Pod="coredns-5dd5756b68-bclg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--bclg9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"63225d84-91a9-409e-b445-bac344cc3e0c", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498", Pod:"coredns-5dd5756b68-bclg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24efe6a85f0", MAC:"6a:4a:22:86:ee:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}},
AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:42.379696 containerd[1457]: 2024-09-04 17:20:42.373 [INFO][3995] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498" Namespace="kube-system" Pod="coredns-5dd5756b68-bclg9" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" Sep 4 17:20:42.384504 containerd[1457]: time="2024-09-04T17:20:42.383802492Z" level=info msg="StartContainer for \"11a0a9e32e8da25eec6940e0a3037bdcce7ad473516362d351b5fe1f5b486730\" returns successfully" Sep 4 17:20:42.404213 containerd[1457]: time="2024-09-04T17:20:42.404053157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:42.404213 containerd[1457]: time="2024-09-04T17:20:42.404102413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:42.404951 containerd[1457]: time="2024-09-04T17:20:42.404890021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:42.405067 containerd[1457]: time="2024-09-04T17:20:42.405027421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:42.425029 systemd[1]: Started cri-containerd-1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498.scope - libcontainer container 1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498. 
Sep 4 17:20:42.441178 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:20:42.469129 containerd[1457]: time="2024-09-04T17:20:42.469014172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bclg9,Uid:63225d84-91a9-409e-b445-bac344cc3e0c,Namespace:kube-system,Attempt:1,} returns sandbox id \"1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498\"" Sep 4 17:20:42.470114 kubelet[2519]: E0904 17:20:42.469948 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:42.472844 containerd[1457]: time="2024-09-04T17:20:42.472651469Z" level=info msg="CreateContainer within sandbox \"1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:20:42.885615 containerd[1457]: time="2024-09-04T17:20:42.885524767Z" level=info msg="CreateContainer within sandbox \"1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f22b75983ee12cb843590513f05f8b7a76fe1b15cafc29673825f8732e809291\"" Sep 4 17:20:42.886103 containerd[1457]: time="2024-09-04T17:20:42.886081401Z" level=info msg="StartContainer for \"f22b75983ee12cb843590513f05f8b7a76fe1b15cafc29673825f8732e809291\"" Sep 4 17:20:42.897755 containerd[1457]: time="2024-09-04T17:20:42.897709742Z" level=info msg="StopPodSandbox for \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\"" Sep 4 17:20:42.922091 systemd[1]: Started cri-containerd-f22b75983ee12cb843590513f05f8b7a76fe1b15cafc29673825f8732e809291.scope - libcontainer container f22b75983ee12cb843590513f05f8b7a76fe1b15cafc29673825f8732e809291. 
Sep 4 17:20:42.968840 containerd[1457]: time="2024-09-04T17:20:42.967615403Z" level=info msg="StartContainer for \"f22b75983ee12cb843590513f05f8b7a76fe1b15cafc29673825f8732e809291\" returns successfully" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:42.961 [INFO][4145] k8s.go 608: Cleaning up netns ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:42.961 [INFO][4145] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" iface="eth0" netns="/var/run/netns/cni-bc8abc7a-30b9-c3c6-e60d-c89321460db9" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:42.962 [INFO][4145] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" iface="eth0" netns="/var/run/netns/cni-bc8abc7a-30b9-c3c6-e60d-c89321460db9" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:42.962 [INFO][4145] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" iface="eth0" netns="/var/run/netns/cni-bc8abc7a-30b9-c3c6-e60d-c89321460db9" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:42.962 [INFO][4145] k8s.go 615: Releasing IP address(es) ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:42.962 [INFO][4145] utils.go 188: Calico CNI releasing IP address ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:42.990 [INFO][4166] ipam_plugin.go 417: Releasing address using handleID ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" HandleID="k8s-pod-network.9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:42.990 [INFO][4166] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:42.990 [INFO][4166] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:43.009 [WARNING][4166] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" HandleID="k8s-pod-network.9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:43.009 [INFO][4166] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" HandleID="k8s-pod-network.9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:43.011 [INFO][4166] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:43.016730 containerd[1457]: 2024-09-04 17:20:43.013 [INFO][4145] k8s.go 621: Teardown processing complete. ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:43.017566 containerd[1457]: time="2024-09-04T17:20:43.016962580Z" level=info msg="TearDown network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\" successfully" Sep 4 17:20:43.017566 containerd[1457]: time="2024-09-04T17:20:43.016988932Z" level=info msg="StopPodSandbox for \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\" returns successfully" Sep 4 17:20:43.018383 kubelet[2519]: E0904 17:20:43.018161 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:43.018736 containerd[1457]: time="2024-09-04T17:20:43.018696844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wjf2b,Uid:3ddb733d-9464-44f3-b6e4-26ba87cd5114,Namespace:kube-system,Attempt:1,}" Sep 4 17:20:43.027422 systemd[1]: run-netns-cni\x2dbc8abc7a\x2d30b9\x2dc3c6\x2de60d\x2dc89321460db9.mount: Deactivated successfully. 
Sep 4 17:20:43.158182 kubelet[2519]: E0904 17:20:43.156060 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:43.174493 kubelet[2519]: I0904 17:20:43.174430 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-bclg9" podStartSLOduration=33.174345865 podCreationTimestamp="2024-09-04 17:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:43.173548109 +0000 UTC m=+45.377922001" watchObservedRunningTime="2024-09-04 17:20:43.174345865 +0000 UTC m=+45.378719747" Sep 4 17:20:43.190308 kubelet[2519]: I0904 17:20:43.190272 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74f559954f-b2z2l" podStartSLOduration=25.765583555 podCreationTimestamp="2024-09-04 17:20:15 +0000 UTC" firstStartedPulling="2024-09-04 17:20:39.82116092 +0000 UTC m=+42.025534802" lastFinishedPulling="2024-09-04 17:20:42.245800194 +0000 UTC m=+44.450174076" observedRunningTime="2024-09-04 17:20:43.187474754 +0000 UTC m=+45.391848637" watchObservedRunningTime="2024-09-04 17:20:43.190222829 +0000 UTC m=+45.394596711" Sep 4 17:20:43.561484 systemd-networkd[1396]: calidd59f146ff0: Link UP Sep 4 17:20:43.561756 systemd-networkd[1396]: calidd59f146ff0: Gained carrier Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.184 [INFO][4179] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--wjf2b-eth0 coredns-5dd5756b68- kube-system 3ddb733d-9464-44f3-b6e4-26ba87cd5114 839 0 2024-09-04 17:20:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] 
map[] [] [] []} {k8s localhost coredns-5dd5756b68-wjf2b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd59f146ff0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Namespace="kube-system" Pod="coredns-5dd5756b68-wjf2b" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wjf2b-" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.184 [INFO][4179] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Namespace="kube-system" Pod="coredns-5dd5756b68-wjf2b" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.231 [INFO][4213] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" HandleID="k8s-pod-network.1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.341 [INFO][4213] ipam_plugin.go 270: Auto assigning IP ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" HandleID="k8s-pod-network.1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308360), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-wjf2b", "timestamp":"2024-09-04 17:20:43.231149381 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.341 [INFO][4213] ipam_plugin.go 358: About to 
acquire host-wide IPAM lock. Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.341 [INFO][4213] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.341 [INFO][4213] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.343 [INFO][4213] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" host="localhost" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.415 [INFO][4213] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.542 [INFO][4213] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.545 [INFO][4213] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.547 [INFO][4213] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.547 [INFO][4213] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" host="localhost" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.548 [INFO][4213] ipam.go 1685: Creating new handle: k8s-pod-network.1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.551 [INFO][4213] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" host="localhost" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.555 [INFO][4213] ipam.go 1216: Successfully claimed IPs: 
[192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" host="localhost" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.555 [INFO][4213] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" host="localhost" Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.555 [INFO][4213] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:43.575691 containerd[1457]: 2024-09-04 17:20:43.555 [INFO][4213] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" HandleID="k8s-pod-network.1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:43.576339 containerd[1457]: 2024-09-04 17:20:43.558 [INFO][4179] k8s.go 386: Populated endpoint ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Namespace="kube-system" Pod="coredns-5dd5756b68-wjf2b" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wjf2b-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3ddb733d-9464-44f3-b6e4-26ba87cd5114", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-wjf2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd59f146ff0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:43.576339 containerd[1457]: 2024-09-04 17:20:43.558 [INFO][4179] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Namespace="kube-system" Pod="coredns-5dd5756b68-wjf2b" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:43.576339 containerd[1457]: 2024-09-04 17:20:43.558 [INFO][4179] dataplane_linux.go 68: Setting the host side veth name to calidd59f146ff0 ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Namespace="kube-system" Pod="coredns-5dd5756b68-wjf2b" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:43.576339 containerd[1457]: 2024-09-04 17:20:43.560 [INFO][4179] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Namespace="kube-system" Pod="coredns-5dd5756b68-wjf2b" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" 
Sep 4 17:20:43.576339 containerd[1457]: 2024-09-04 17:20:43.561 [INFO][4179] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Namespace="kube-system" Pod="coredns-5dd5756b68-wjf2b" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wjf2b-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3ddb733d-9464-44f3-b6e4-26ba87cd5114", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d", Pod:"coredns-5dd5756b68-wjf2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd59f146ff0", MAC:"ee:16:4e:1a:2d:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:43.576339 containerd[1457]: 2024-09-04 17:20:43.571 [INFO][4179] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d" Namespace="kube-system" Pod="coredns-5dd5756b68-wjf2b" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:43.590313 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:44680.service - OpenSSH per-connection server daemon (10.0.0.1:44680). Sep 4 17:20:43.620213 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 44680 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:20:43.622040 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:43.626466 systemd-logind[1440]: New session 11 of user core. Sep 4 17:20:43.636968 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:20:43.672570 containerd[1457]: time="2024-09-04T17:20:43.672397821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:43.672570 containerd[1457]: time="2024-09-04T17:20:43.672525151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:43.672570 containerd[1457]: time="2024-09-04T17:20:43.672554548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:43.672570 containerd[1457]: time="2024-09-04T17:20:43.672573906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:43.692114 systemd[1]: Started cri-containerd-1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d.scope - libcontainer container 1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d. Sep 4 17:20:43.707210 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:20:43.727096 systemd-networkd[1396]: cali24efe6a85f0: Gained IPv6LL Sep 4 17:20:43.736455 containerd[1457]: time="2024-09-04T17:20:43.736025511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wjf2b,Uid:3ddb733d-9464-44f3-b6e4-26ba87cd5114,Namespace:kube-system,Attempt:1,} returns sandbox id \"1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d\"" Sep 4 17:20:43.736961 kubelet[2519]: E0904 17:20:43.736934 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:43.739123 containerd[1457]: time="2024-09-04T17:20:43.739085898Z" level=info msg="CreateContainer within sandbox \"1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:20:43.787001 sshd[4243]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:43.799131 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:44680.service: Deactivated successfully. Sep 4 17:20:43.801115 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:20:43.801888 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:20:43.813242 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:44682.service - OpenSSH per-connection server daemon (10.0.0.1:44682). Sep 4 17:20:43.814005 systemd-logind[1440]: Removed session 11. 
Sep 4 17:20:43.819987 containerd[1457]: time="2024-09-04T17:20:43.819915532Z" level=info msg="CreateContainer within sandbox \"1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"138c930be69896ab209a222d5f05722b6ee78a277adbcc04d89a34280cde721e\"" Sep 4 17:20:43.820851 containerd[1457]: time="2024-09-04T17:20:43.820731543Z" level=info msg="StartContainer for \"138c930be69896ab209a222d5f05722b6ee78a277adbcc04d89a34280cde721e\"" Sep 4 17:20:43.844148 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 44682 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:20:43.844934 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:43.850009 systemd[1]: Started cri-containerd-138c930be69896ab209a222d5f05722b6ee78a277adbcc04d89a34280cde721e.scope - libcontainer container 138c930be69896ab209a222d5f05722b6ee78a277adbcc04d89a34280cde721e. Sep 4 17:20:43.856791 systemd-logind[1440]: New session 12 of user core. Sep 4 17:20:43.862014 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 4 17:20:43.898883 containerd[1457]: time="2024-09-04T17:20:43.898396354Z" level=info msg="StopPodSandbox for \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\"" Sep 4 17:20:44.262149 containerd[1457]: time="2024-09-04T17:20:44.262081846Z" level=info msg="StartContainer for \"138c930be69896ab209a222d5f05722b6ee78a277adbcc04d89a34280cde721e\" returns successfully" Sep 4 17:20:44.266653 kubelet[2519]: E0904 17:20:44.266620 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:44.267175 kubelet[2519]: E0904 17:20:44.266996 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.072 [INFO][4353] k8s.go 608: Cleaning up netns ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.072 [INFO][4353] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" iface="eth0" netns="/var/run/netns/cni-b3324f37-142f-88fd-22ea-276095eb8aba" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.073 [INFO][4353] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" iface="eth0" netns="/var/run/netns/cni-b3324f37-142f-88fd-22ea-276095eb8aba" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.073 [INFO][4353] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" iface="eth0" netns="/var/run/netns/cni-b3324f37-142f-88fd-22ea-276095eb8aba" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.073 [INFO][4353] k8s.go 615: Releasing IP address(es) ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.073 [INFO][4353] utils.go 188: Calico CNI releasing IP address ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.237 [INFO][4366] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" HandleID="k8s-pod-network.ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.237 [INFO][4366] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.237 [INFO][4366] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.324 [WARNING][4366] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" HandleID="k8s-pod-network.ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.324 [INFO][4366] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" HandleID="k8s-pod-network.ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.355 [INFO][4366] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:44.364920 containerd[1457]: 2024-09-04 17:20:44.360 [INFO][4353] k8s.go 621: Teardown processing complete. ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:44.366344 containerd[1457]: time="2024-09-04T17:20:44.366294707Z" level=info msg="TearDown network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\" successfully" Sep 4 17:20:44.366344 containerd[1457]: time="2024-09-04T17:20:44.366341559Z" level=info msg="StopPodSandbox for \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\" returns successfully" Sep 4 17:20:44.368432 containerd[1457]: time="2024-09-04T17:20:44.368369164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s949d,Uid:d2cfa41a-8321-4973-acd5-1a4593214e59,Namespace:calico-system,Attempt:1,}" Sep 4 17:20:44.369064 systemd[1]: run-netns-cni\x2db3324f37\x2d142f\x2d88fd\x2d22ea\x2d276095eb8aba.mount: Deactivated successfully. 
Sep 4 17:20:44.399389 kubelet[2519]: I0904 17:20:44.397186 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wjf2b" podStartSLOduration=34.39713917 podCreationTimestamp="2024-09-04 17:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:20:44.355585449 +0000 UTC m=+46.559959331" watchObservedRunningTime="2024-09-04 17:20:44.39713917 +0000 UTC m=+46.601513062" Sep 4 17:20:44.650336 sshd[4298]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:44.660296 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:44682.service: Deactivated successfully. Sep 4 17:20:44.664592 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:20:44.667750 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:20:44.681242 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:44686.service - OpenSSH per-connection server daemon (10.0.0.1:44686). Sep 4 17:20:44.681978 systemd-logind[1440]: Removed session 12. Sep 4 17:20:44.748604 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 44686 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:20:44.750358 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:44.756597 systemd-logind[1440]: New session 13 of user core. Sep 4 17:20:44.768969 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:20:45.107622 sshd[4386]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:45.111759 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:44686.service: Deactivated successfully. Sep 4 17:20:45.114419 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:20:45.115397 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:20:45.116644 systemd-logind[1440]: Removed session 13. 
Sep 4 17:20:45.190865 systemd-networkd[1396]: calibee2a0405ec: Link UP Sep 4 17:20:45.191953 systemd-networkd[1396]: calibee2a0405ec: Gained carrier Sep 4 17:20:45.198097 systemd-networkd[1396]: calidd59f146ff0: Gained IPv6LL Sep 4 17:20:45.268867 kubelet[2519]: E0904 17:20:45.267923 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:45.268867 kubelet[2519]: E0904 17:20:45.268648 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:44.885 [INFO][4389] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s949d-eth0 csi-node-driver- calico-system d2cfa41a-8321-4973-acd5-1a4593214e59 862 0 2024-09-04 17:20:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-s949d eth0 default [] [] [kns.calico-system ksa.calico-system.default] calibee2a0405ec [] []}} ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Namespace="calico-system" Pod="csi-node-driver-s949d" WorkloadEndpoint="localhost-k8s-csi--node--driver--s949d-" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:44.886 [INFO][4389] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Namespace="calico-system" Pod="csi-node-driver-s949d" WorkloadEndpoint="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 
17:20:45.130 [INFO][4411] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" HandleID="k8s-pod-network.1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.138 [INFO][4411] ipam_plugin.go 270: Auto assigning IP ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" HandleID="k8s-pod-network.1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Workload="localhost-k8s-csi--node--driver--s949d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005918b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s949d", "timestamp":"2024-09-04 17:20:45.13013047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.138 [INFO][4411] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.138 [INFO][4411] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.138 [INFO][4411] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.140 [INFO][4411] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" host="localhost" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.148 [INFO][4411] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.152 [INFO][4411] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.153 [INFO][4411] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.156 [INFO][4411] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.156 [INFO][4411] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" host="localhost" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.157 [INFO][4411] ipam.go 1685: Creating new handle: k8s-pod-network.1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274 Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.159 [INFO][4411] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" host="localhost" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.184 [INFO][4411] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" host="localhost" Sep 4 
17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.184 [INFO][4411] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" host="localhost" Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.184 [INFO][4411] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:45.403105 containerd[1457]: 2024-09-04 17:20:45.184 [INFO][4411] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" HandleID="k8s-pod-network.1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:45.404210 containerd[1457]: 2024-09-04 17:20:45.187 [INFO][4389] k8s.go 386: Populated endpoint ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Namespace="calico-system" Pod="csi-node-driver-s949d" WorkloadEndpoint="localhost-k8s-csi--node--driver--s949d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s949d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d2cfa41a-8321-4973-acd5-1a4593214e59", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s949d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibee2a0405ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:45.404210 containerd[1457]: 2024-09-04 17:20:45.187 [INFO][4389] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Namespace="calico-system" Pod="csi-node-driver-s949d" WorkloadEndpoint="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:45.404210 containerd[1457]: 2024-09-04 17:20:45.187 [INFO][4389] dataplane_linux.go 68: Setting the host side veth name to calibee2a0405ec ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Namespace="calico-system" Pod="csi-node-driver-s949d" WorkloadEndpoint="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:45.404210 containerd[1457]: 2024-09-04 17:20:45.191 [INFO][4389] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Namespace="calico-system" Pod="csi-node-driver-s949d" WorkloadEndpoint="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:45.404210 containerd[1457]: 2024-09-04 17:20:45.191 [INFO][4389] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Namespace="calico-system" Pod="csi-node-driver-s949d" WorkloadEndpoint="localhost-k8s-csi--node--driver--s949d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s949d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d2cfa41a-8321-4973-acd5-1a4593214e59", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274", Pod:"csi-node-driver-s949d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibee2a0405ec", MAC:"2e:99:9f:46:aa:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:45.404210 containerd[1457]: 2024-09-04 17:20:45.398 [INFO][4389] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274" Namespace="calico-system" Pod="csi-node-driver-s949d" WorkloadEndpoint="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:45.455452 containerd[1457]: time="2024-09-04T17:20:45.455261329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:45.455452 containerd[1457]: time="2024-09-04T17:20:45.455321707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:45.455452 containerd[1457]: time="2024-09-04T17:20:45.455338550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:45.455452 containerd[1457]: time="2024-09-04T17:20:45.455350223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:45.479974 systemd[1]: Started cri-containerd-1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274.scope - libcontainer container 1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274. Sep 4 17:20:45.493113 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:20:45.504638 containerd[1457]: time="2024-09-04T17:20:45.504580248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s949d,Uid:d2cfa41a-8321-4973-acd5-1a4593214e59,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274\"" Sep 4 17:20:45.505962 containerd[1457]: time="2024-09-04T17:20:45.505942005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:20:46.277978 kubelet[2519]: E0904 17:20:46.277946 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:20:46.350027 systemd-networkd[1396]: calibee2a0405ec: Gained IPv6LL Sep 4 17:20:48.510287 containerd[1457]: time="2024-09-04T17:20:48.510221728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:48.556317 containerd[1457]: time="2024-09-04T17:20:48.556219566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Sep 4 17:20:48.573656 containerd[1457]: time="2024-09-04T17:20:48.572439845Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:48.585995 containerd[1457]: time="2024-09-04T17:20:48.585921617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:48.587430 containerd[1457]: time="2024-09-04T17:20:48.587383712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 3.081409755s" Sep 4 17:20:48.587510 containerd[1457]: time="2024-09-04T17:20:48.587437046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Sep 4 17:20:48.589746 containerd[1457]: time="2024-09-04T17:20:48.589541715Z" level=info msg="CreateContainer within sandbox \"1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:20:48.919468 containerd[1457]: time="2024-09-04T17:20:48.919338191Z" level=info msg="CreateContainer within sandbox \"1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"74b492c4b13583fcc45adc1a0abf30a4010d063a22f02f6970a6976eb542a21f\"" Sep 4 17:20:48.920990 containerd[1457]: time="2024-09-04T17:20:48.920405315Z" level=info msg="StartContainer for \"74b492c4b13583fcc45adc1a0abf30a4010d063a22f02f6970a6976eb542a21f\"" Sep 4 17:20:48.951966 systemd[1]: Started cri-containerd-74b492c4b13583fcc45adc1a0abf30a4010d063a22f02f6970a6976eb542a21f.scope - libcontainer container 74b492c4b13583fcc45adc1a0abf30a4010d063a22f02f6970a6976eb542a21f. Sep 4 17:20:49.921284 containerd[1457]: time="2024-09-04T17:20:49.920485103Z" level=info msg="StartContainer for \"74b492c4b13583fcc45adc1a0abf30a4010d063a22f02f6970a6976eb542a21f\" returns successfully" Sep 4 17:20:49.921997 containerd[1457]: time="2024-09-04T17:20:49.921916735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:20:50.124215 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:38272.service - OpenSSH per-connection server daemon (10.0.0.1:38272). Sep 4 17:20:50.159627 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 38272 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:20:50.160423 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:50.166058 systemd-logind[1440]: New session 14 of user core. Sep 4 17:20:50.171012 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:20:50.436602 sshd[4535]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:50.440294 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:38272.service: Deactivated successfully. Sep 4 17:20:50.442390 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:20:50.443180 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:20:50.444153 systemd-logind[1440]: Removed session 14. 
Sep 4 17:20:52.546343 containerd[1457]: time="2024-09-04T17:20:52.546265710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:52.579708 containerd[1457]: time="2024-09-04T17:20:52.579585454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Sep 4 17:20:52.595495 containerd[1457]: time="2024-09-04T17:20:52.595431438Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:52.617538 containerd[1457]: time="2024-09-04T17:20:52.617460701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:52.618255 containerd[1457]: time="2024-09-04T17:20:52.618203145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 2.696254206s" Sep 4 17:20:52.618301 containerd[1457]: time="2024-09-04T17:20:52.618255768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Sep 4 17:20:52.620522 containerd[1457]: time="2024-09-04T17:20:52.620468752Z" level=info msg="CreateContainer within sandbox \"1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:20:52.797058 containerd[1457]: time="2024-09-04T17:20:52.796850757Z" level=info msg="CreateContainer within sandbox \"1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"41e3ebe089943f1eba897d0544e6f0408e11975819591733a4db05c1cb4f4c2f\"" Sep 4 17:20:52.798091 containerd[1457]: time="2024-09-04T17:20:52.798040221Z" level=info msg="StartContainer for \"41e3ebe089943f1eba897d0544e6f0408e11975819591733a4db05c1cb4f4c2f\"" Sep 4 17:20:52.831903 systemd[1]: Started cri-containerd-41e3ebe089943f1eba897d0544e6f0408e11975819591733a4db05c1cb4f4c2f.scope - libcontainer container 41e3ebe089943f1eba897d0544e6f0408e11975819591733a4db05c1cb4f4c2f. Sep 4 17:20:52.867077 containerd[1457]: time="2024-09-04T17:20:52.867019262Z" level=info msg="StartContainer for \"41e3ebe089943f1eba897d0544e6f0408e11975819591733a4db05c1cb4f4c2f\" returns successfully" Sep 4 17:20:52.939755 kubelet[2519]: I0904 17:20:52.939715 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-s949d" podStartSLOduration=30.826827402 podCreationTimestamp="2024-09-04 17:20:15 +0000 UTC" firstStartedPulling="2024-09-04 17:20:45.505730631 +0000 UTC m=+47.710104503" lastFinishedPulling="2024-09-04 17:20:52.618568395 +0000 UTC m=+54.822942277" observedRunningTime="2024-09-04 17:20:52.939599658 +0000 UTC m=+55.143973540" watchObservedRunningTime="2024-09-04 17:20:52.939665176 +0000 UTC m=+55.144039058" Sep 4 17:20:52.982348 kubelet[2519]: I0904 17:20:52.982311 2519 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:20:52.982348 kubelet[2519]: I0904 17:20:52.982353 2519 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:20:55.449995 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:38286.service - OpenSSH per-connection server daemon (10.0.0.1:38286). Sep 4 17:20:55.490069 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 38286 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8 Sep 4 17:20:55.491846 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:20:55.496310 systemd-logind[1440]: New session 15 of user core. Sep 4 17:20:55.503987 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:20:55.635091 sshd[4645]: pam_unix(sshd:session): session closed for user core Sep 4 17:20:55.639256 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:38286.service: Deactivated successfully. Sep 4 17:20:55.641277 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:20:55.641946 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:20:55.642804 systemd-logind[1440]: Removed session 15. Sep 4 17:20:57.875841 containerd[1457]: time="2024-09-04T17:20:57.875452714Z" level=info msg="StopPodSandbox for \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\"" Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.072 [WARNING][4678] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s949d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d2cfa41a-8321-4973-acd5-1a4593214e59", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274", Pod:"csi-node-driver-s949d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibee2a0405ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.072 [INFO][4678] k8s.go 608: Cleaning up netns ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.072 [INFO][4678] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" iface="eth0" netns="" Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.072 [INFO][4678] k8s.go 615: Releasing IP address(es) ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.073 [INFO][4678] utils.go 188: Calico CNI releasing IP address ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.095 [INFO][4687] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" HandleID="k8s-pod-network.ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.095 [INFO][4687] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.095 [INFO][4687] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.100 [WARNING][4687] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" HandleID="k8s-pod-network.ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.100 [INFO][4687] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" HandleID="k8s-pod-network.ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.101 [INFO][4687] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:20:58.105983 containerd[1457]: 2024-09-04 17:20:58.103 [INFO][4678] k8s.go 621: Teardown processing complete. ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:58.106672 containerd[1457]: time="2024-09-04T17:20:58.106022489Z" level=info msg="TearDown network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\" successfully" Sep 4 17:20:58.106672 containerd[1457]: time="2024-09-04T17:20:58.106051394Z" level=info msg="StopPodSandbox for \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\" returns successfully" Sep 4 17:20:58.106672 containerd[1457]: time="2024-09-04T17:20:58.106612731Z" level=info msg="RemovePodSandbox for \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\"" Sep 4 17:20:58.109597 containerd[1457]: time="2024-09-04T17:20:58.109570709Z" level=info msg="Forcibly stopping sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\"" Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.156 [WARNING][4711] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s949d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d2cfa41a-8321-4973-acd5-1a4593214e59", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a4a12c08adc81eae97b1ec2fc84f65c5d7376cfa67a184c1f36248323bee274", Pod:"csi-node-driver-s949d", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibee2a0405ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.156 [INFO][4711] k8s.go 608: Cleaning up netns ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.156 [INFO][4711] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" iface="eth0" netns="" Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.156 [INFO][4711] k8s.go 615: Releasing IP address(es) ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.156 [INFO][4711] utils.go 188: Calico CNI releasing IP address ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.178 [INFO][4719] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" HandleID="k8s-pod-network.ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.178 [INFO][4719] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.178 [INFO][4719] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.183 [WARNING][4719] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" HandleID="k8s-pod-network.ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.183 [INFO][4719] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" HandleID="k8s-pod-network.ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Workload="localhost-k8s-csi--node--driver--s949d-eth0" Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.184 [INFO][4719] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:20:58.188786 containerd[1457]: 2024-09-04 17:20:58.186 [INFO][4711] k8s.go 621: Teardown processing complete. ContainerID="ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850" Sep 4 17:20:58.188786 containerd[1457]: time="2024-09-04T17:20:58.188755030Z" level=info msg="TearDown network for sandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\" successfully" Sep 4 17:20:58.295435 containerd[1457]: time="2024-09-04T17:20:58.295378846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:20:58.300785 containerd[1457]: time="2024-09-04T17:20:58.300752460Z" level=info msg="RemovePodSandbox \"ee6e67217fd9acc6b5734210ae36f09f5c91dc6152f67f0f41ecd57c1929e850\" returns successfully" Sep 4 17:20:58.301280 containerd[1457]: time="2024-09-04T17:20:58.301241186Z" level=info msg="StopPodSandbox for \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\"" Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.335 [WARNING][4743] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0", GenerateName:"calico-kube-controllers-74f559954f-", Namespace:"calico-system", SelfLink:"", UID:"827502be-1bad-4761-be20-f8b4bc19f05e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f559954f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab", Pod:"calico-kube-controllers-74f559954f-b2z2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc973f42a64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.335 [INFO][4743] k8s.go 608: Cleaning up netns ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.335 [INFO][4743] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" iface="eth0" netns="" Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.335 [INFO][4743] k8s.go 615: Releasing IP address(es) ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.335 [INFO][4743] utils.go 188: Calico CNI releasing IP address ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.356 [INFO][4750] ipam_plugin.go 417: Releasing address using handleID ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" HandleID="k8s-pod-network.4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0" Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.356 [INFO][4750] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.356 [INFO][4750] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.361 [WARNING][4750] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" HandleID="k8s-pod-network.4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0" Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.361 [INFO][4750] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" HandleID="k8s-pod-network.4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0" Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.362 [INFO][4750] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:58.367454 containerd[1457]: 2024-09-04 17:20:58.365 [INFO][4743] k8s.go 621: Teardown processing complete. ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:58.367988 containerd[1457]: time="2024-09-04T17:20:58.367488191Z" level=info msg="TearDown network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\" successfully" Sep 4 17:20:58.367988 containerd[1457]: time="2024-09-04T17:20:58.367514142Z" level=info msg="StopPodSandbox for \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\" returns successfully" Sep 4 17:20:58.368064 containerd[1457]: time="2024-09-04T17:20:58.368031464Z" level=info msg="RemovePodSandbox for \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\"" Sep 4 17:20:58.368099 containerd[1457]: time="2024-09-04T17:20:58.368075378Z" level=info msg="Forcibly stopping sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\"" Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.404 [WARNING][4773] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0", GenerateName:"calico-kube-controllers-74f559954f-", Namespace:"calico-system", SelfLink:"", UID:"827502be-1bad-4761-be20-f8b4bc19f05e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f559954f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9feeca9ca120d7b56f15ee996b1b4a93d726c440d241df83466babe4098ab4ab", Pod:"calico-kube-controllers-74f559954f-b2z2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibc973f42a64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.405 [INFO][4773] k8s.go 608: Cleaning up netns ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.405 [INFO][4773] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" iface="eth0" netns="" Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.405 [INFO][4773] k8s.go 615: Releasing IP address(es) ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.405 [INFO][4773] utils.go 188: Calico CNI releasing IP address ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.429 [INFO][4780] ipam_plugin.go 417: Releasing address using handleID ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" HandleID="k8s-pod-network.4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0" Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.429 [INFO][4780] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.429 [INFO][4780] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.434 [WARNING][4780] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" HandleID="k8s-pod-network.4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0" Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.434 [INFO][4780] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" HandleID="k8s-pod-network.4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Workload="localhost-k8s-calico--kube--controllers--74f559954f--b2z2l-eth0" Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.435 [INFO][4780] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:58.439994 containerd[1457]: 2024-09-04 17:20:58.437 [INFO][4773] k8s.go 621: Teardown processing complete. ContainerID="4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec" Sep 4 17:20:58.439994 containerd[1457]: time="2024-09-04T17:20:58.439967403Z" level=info msg="TearDown network for sandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\" successfully" Sep 4 17:20:58.444236 containerd[1457]: time="2024-09-04T17:20:58.444201641Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:20:58.444301 containerd[1457]: time="2024-09-04T17:20:58.444273570Z" level=info msg="RemovePodSandbox \"4f956c604093b0bfb92c2f8815487c1238b6ac0e897f7572b301ba1c45096dec\" returns successfully" Sep 4 17:20:58.444771 containerd[1457]: time="2024-09-04T17:20:58.444747297Z" level=info msg="StopPodSandbox for \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\"" Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.477 [WARNING][4803] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wjf2b-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3ddb733d-9464-44f3-b6e4-26ba87cd5114", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d", Pod:"coredns-5dd5756b68-wjf2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd59f146ff0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.478 [INFO][4803] k8s.go 608: Cleaning up netns ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.478 [INFO][4803] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" iface="eth0" netns="" Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.478 [INFO][4803] k8s.go 615: Releasing IP address(es) ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.478 [INFO][4803] utils.go 188: Calico CNI releasing IP address ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.497 [INFO][4811] ipam_plugin.go 417: Releasing address using handleID ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" HandleID="k8s-pod-network.9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.497 [INFO][4811] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.497 [INFO][4811] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.502 [WARNING][4811] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" HandleID="k8s-pod-network.9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.502 [INFO][4811] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" HandleID="k8s-pod-network.9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.503 [INFO][4811] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:58.507598 containerd[1457]: 2024-09-04 17:20:58.505 [INFO][4803] k8s.go 621: Teardown processing complete. ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:58.508043 containerd[1457]: time="2024-09-04T17:20:58.507635460Z" level=info msg="TearDown network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\" successfully" Sep 4 17:20:58.508043 containerd[1457]: time="2024-09-04T17:20:58.507661760Z" level=info msg="StopPodSandbox for \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\" returns successfully" Sep 4 17:20:58.508144 containerd[1457]: time="2024-09-04T17:20:58.508110619Z" level=info msg="RemovePodSandbox for \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\"" Sep 4 17:20:58.508144 containerd[1457]: time="2024-09-04T17:20:58.508147050Z" level=info msg="Forcibly stopping sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\"" Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.538 [WARNING][4833] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--wjf2b-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"3ddb733d-9464-44f3-b6e4-26ba87cd5114", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1154b0f459b89bfcfb731a3e00972053d5329f4492897f5b4a7d83f66e62db8d", Pod:"coredns-5dd5756b68-wjf2b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd59f146ff0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.538 [INFO][4833] k8s.go 
608: Cleaning up netns ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.538 [INFO][4833] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" iface="eth0" netns="" Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.538 [INFO][4833] k8s.go 615: Releasing IP address(es) ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.538 [INFO][4833] utils.go 188: Calico CNI releasing IP address ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.557 [INFO][4841] ipam_plugin.go 417: Releasing address using handleID ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" HandleID="k8s-pod-network.9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.557 [INFO][4841] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.557 [INFO][4841] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.562 [WARNING][4841] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" HandleID="k8s-pod-network.9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.562 [INFO][4841] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" HandleID="k8s-pod-network.9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Workload="localhost-k8s-coredns--5dd5756b68--wjf2b-eth0" Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.563 [INFO][4841] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:58.568139 containerd[1457]: 2024-09-04 17:20:58.565 [INFO][4833] k8s.go 621: Teardown processing complete. ContainerID="9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12" Sep 4 17:20:58.568795 containerd[1457]: time="2024-09-04T17:20:58.568203871Z" level=info msg="TearDown network for sandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\" successfully" Sep 4 17:20:58.582435 containerd[1457]: time="2024-09-04T17:20:58.582401669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:20:58.582508 containerd[1457]: time="2024-09-04T17:20:58.582448198Z" level=info msg="RemovePodSandbox \"9389327500dbdbb49b769a36336f01a803524c3f3dd72e91e20d8ed487496d12\" returns successfully" Sep 4 17:20:58.582925 containerd[1457]: time="2024-09-04T17:20:58.582882710Z" level=info msg="StopPodSandbox for \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\"" Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.612 [WARNING][4863] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--bclg9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"63225d84-91a9-409e-b445-bac344cc3e0c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498", Pod:"coredns-5dd5756b68-bclg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24efe6a85f0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.613 [INFO][4863] k8s.go 608: Cleaning up netns ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.613 [INFO][4863] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" iface="eth0" netns="" Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.613 [INFO][4863] k8s.go 615: Releasing IP address(es) ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.613 [INFO][4863] utils.go 188: Calico CNI releasing IP address ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.631 [INFO][4871] ipam_plugin.go 417: Releasing address using handleID ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" HandleID="k8s-pod-network.15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.631 [INFO][4871] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.631 [INFO][4871] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.636 [WARNING][4871] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" HandleID="k8s-pod-network.15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.636 [INFO][4871] ipam_plugin.go 445: Releasing address using workloadID ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" HandleID="k8s-pod-network.15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.637 [INFO][4871] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:58.642201 containerd[1457]: 2024-09-04 17:20:58.639 [INFO][4863] k8s.go 621: Teardown processing complete. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:58.642760 containerd[1457]: time="2024-09-04T17:20:58.642236310Z" level=info msg="TearDown network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\" successfully" Sep 4 17:20:58.642760 containerd[1457]: time="2024-09-04T17:20:58.642267109Z" level=info msg="StopPodSandbox for \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\" returns successfully" Sep 4 17:20:58.642760 containerd[1457]: time="2024-09-04T17:20:58.642730115Z" level=info msg="RemovePodSandbox for \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\"" Sep 4 17:20:58.642877 containerd[1457]: time="2024-09-04T17:20:58.642762338Z" level=info msg="Forcibly stopping sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\"" Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.673 [WARNING][4894] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--bclg9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"63225d84-91a9-409e-b445-bac344cc3e0c", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bef9b91f4a16d44f1b9bdfb6ce1c3b691700f8957597df6866e86d99d843498", Pod:"coredns-5dd5756b68-bclg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24efe6a85f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.673 [INFO][4894] k8s.go 
608: Cleaning up netns ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.673 [INFO][4894] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" iface="eth0" netns="" Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.673 [INFO][4894] k8s.go 615: Releasing IP address(es) ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.673 [INFO][4894] utils.go 188: Calico CNI releasing IP address ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.691 [INFO][4901] ipam_plugin.go 417: Releasing address using handleID ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" HandleID="k8s-pod-network.15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.692 [INFO][4901] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.692 [INFO][4901] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.696 [WARNING][4901] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" HandleID="k8s-pod-network.15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.697 [INFO][4901] ipam_plugin.go 445: Releasing address using workloadID ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" HandleID="k8s-pod-network.15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Workload="localhost-k8s-coredns--5dd5756b68--bclg9-eth0" Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.698 [INFO][4901] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:20:58.703427 containerd[1457]: 2024-09-04 17:20:58.700 [INFO][4894] k8s.go 621: Teardown processing complete. ContainerID="15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9" Sep 4 17:20:58.703427 containerd[1457]: time="2024-09-04T17:20:58.703389874Z" level=info msg="TearDown network for sandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\" successfully" Sep 4 17:20:58.710711 containerd[1457]: time="2024-09-04T17:20:58.710656743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:20:58.710890 containerd[1457]: time="2024-09-04T17:20:58.710740155Z" level=info msg="RemovePodSandbox \"15bd1a849cb6a3a21e76d8eed692d0f4e835d389dc742af8ced141402ba777e9\" returns successfully" Sep 4 17:21:00.652011 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:59426.service - OpenSSH per-connection server daemon (10.0.0.1:59426). 
Sep 4 17:21:00.692321 sshd[4910]: Accepted publickey for core from 10.0.0.1 port 59426 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:00.694018 sshd[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:00.698148 systemd-logind[1440]: New session 16 of user core.
Sep 4 17:21:00.710025 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 17:21:00.818331 sshd[4910]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:00.821236 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:59426.service: Deactivated successfully.
Sep 4 17:21:00.823205 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 17:21:00.824612 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit.
Sep 4 17:21:00.825502 systemd-logind[1440]: Removed session 16.
Sep 4 17:21:05.839139 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:35180.service - OpenSSH per-connection server daemon (10.0.0.1:35180).
Sep 4 17:21:05.870594 sshd[4926]: Accepted publickey for core from 10.0.0.1 port 35180 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:05.872198 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:05.876528 systemd-logind[1440]: New session 17 of user core.
Sep 4 17:21:05.885945 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 17:21:05.994548 sshd[4926]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:05.999020 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:35180.service: Deactivated successfully.
Sep 4 17:21:06.001544 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 17:21:06.002449 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit.
Sep 4 17:21:06.003650 systemd-logind[1440]: Removed session 17.
Sep 4 17:21:11.009708 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:35188.service - OpenSSH per-connection server daemon (10.0.0.1:35188).
Sep 4 17:21:11.040410 sshd[4967]: Accepted publickey for core from 10.0.0.1 port 35188 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:11.042097 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:11.046102 systemd-logind[1440]: New session 18 of user core.
Sep 4 17:21:11.054957 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 17:21:11.156145 sshd[4967]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:11.164663 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:35188.service: Deactivated successfully.
Sep 4 17:21:11.166592 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 17:21:11.168076 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit.
Sep 4 17:21:11.175052 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:35200.service - OpenSSH per-connection server daemon (10.0.0.1:35200).
Sep 4 17:21:11.175910 systemd-logind[1440]: Removed session 18.
Sep 4 17:21:11.201468 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 35200 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:11.202964 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:11.206678 systemd-logind[1440]: New session 19 of user core.
Sep 4 17:21:11.211930 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 17:21:11.539711 sshd[4982]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:11.551549 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:35200.service: Deactivated successfully.
Sep 4 17:21:11.553166 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 17:21:11.554726 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit.
Sep 4 17:21:11.556085 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:35212.service - OpenSSH per-connection server daemon (10.0.0.1:35212).
Sep 4 17:21:11.557442 systemd-logind[1440]: Removed session 19.
Sep 4 17:21:11.591208 sshd[4994]: Accepted publickey for core from 10.0.0.1 port 35212 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:11.592891 sshd[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:11.596785 systemd-logind[1440]: New session 20 of user core.
Sep 4 17:21:11.601926 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:21:12.552219 sshd[4994]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:12.560795 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:35212.service: Deactivated successfully.
Sep 4 17:21:12.562621 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:21:12.564021 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:21:12.572105 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:35226.service - OpenSSH per-connection server daemon (10.0.0.1:35226).
Sep 4 17:21:12.572889 systemd-logind[1440]: Removed session 20.
Sep 4 17:21:12.623831 sshd[5017]: Accepted publickey for core from 10.0.0.1 port 35226 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:12.625249 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:12.629333 systemd-logind[1440]: New session 21 of user core.
Sep 4 17:21:12.641014 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:21:12.979359 sshd[5017]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:12.993947 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:35226.service: Deactivated successfully.
Sep 4 17:21:12.996012 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:21:12.997888 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:21:13.009129 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:35228.service - OpenSSH per-connection server daemon (10.0.0.1:35228).
Sep 4 17:21:13.010074 systemd-logind[1440]: Removed session 21.
Sep 4 17:21:13.037337 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 35228 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:13.039167 sshd[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:13.043368 systemd-logind[1440]: New session 22 of user core.
Sep 4 17:21:13.050123 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 17:21:13.165646 sshd[5030]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:13.170019 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:35228.service: Deactivated successfully.
Sep 4 17:21:13.172025 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 17:21:13.172687 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
Sep 4 17:21:13.173565 systemd-logind[1440]: Removed session 22.
Sep 4 17:21:13.897210 kubelet[2519]: E0904 17:21:13.897155 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:18.182922 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:33016.service - OpenSSH per-connection server daemon (10.0.0.1:33016).
Sep 4 17:21:18.213588 sshd[5055]: Accepted publickey for core from 10.0.0.1 port 33016 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:18.214921 sshd[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:18.218454 systemd-logind[1440]: New session 23 of user core.
Sep 4 17:21:18.225952 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:21:18.329973 sshd[5055]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:18.333547 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:33016.service: Deactivated successfully.
Sep 4 17:21:18.335606 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:21:18.336301 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:21:18.337237 systemd-logind[1440]: Removed session 23.
Sep 4 17:21:22.771887 kubelet[2519]: E0904 17:21:22.771855 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:23.343323 systemd[1]: Started sshd@23-10.0.0.43:22-10.0.0.1:33020.service - OpenSSH per-connection server daemon (10.0.0.1:33020).
Sep 4 17:21:23.384919 sshd[5093]: Accepted publickey for core from 10.0.0.1 port 33020 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:23.386885 sshd[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:23.391828 systemd-logind[1440]: New session 24 of user core.
Sep 4 17:21:23.403060 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:21:23.513369 sshd[5093]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:23.518075 systemd[1]: sshd@23-10.0.0.43:22-10.0.0.1:33020.service: Deactivated successfully.
Sep 4 17:21:23.520203 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:21:23.521250 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:21:23.522478 systemd-logind[1440]: Removed session 24.
Sep 4 17:21:25.509626 kubelet[2519]: I0904 17:21:25.507832 2519 topology_manager.go:215] "Topology Admit Handler" podUID="8d723f91-bc42-4ae5-b58e-4b7e476fa7ea" podNamespace="calico-apiserver" podName="calico-apiserver-6499889468-kfxr8"
Sep 4 17:21:25.558498 systemd[1]: Created slice kubepods-besteffort-pod8d723f91_bc42_4ae5_b58e_4b7e476fa7ea.slice - libcontainer container kubepods-besteffort-pod8d723f91_bc42_4ae5_b58e_4b7e476fa7ea.slice.
Sep 4 17:21:25.633868 kubelet[2519]: I0904 17:21:25.633796 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d723f91-bc42-4ae5-b58e-4b7e476fa7ea-calico-apiserver-certs\") pod \"calico-apiserver-6499889468-kfxr8\" (UID: \"8d723f91-bc42-4ae5-b58e-4b7e476fa7ea\") " pod="calico-apiserver/calico-apiserver-6499889468-kfxr8"
Sep 4 17:21:25.634050 kubelet[2519]: I0904 17:21:25.633887 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfw6\" (UniqueName: \"kubernetes.io/projected/8d723f91-bc42-4ae5-b58e-4b7e476fa7ea-kube-api-access-ntfw6\") pod \"calico-apiserver-6499889468-kfxr8\" (UID: \"8d723f91-bc42-4ae5-b58e-4b7e476fa7ea\") " pod="calico-apiserver/calico-apiserver-6499889468-kfxr8"
Sep 4 17:21:25.741273 kubelet[2519]: E0904 17:21:25.740426 2519 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Sep 4 17:21:25.744192 kubelet[2519]: E0904 17:21:25.743869 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d723f91-bc42-4ae5-b58e-4b7e476fa7ea-calico-apiserver-certs podName:8d723f91-bc42-4ae5-b58e-4b7e476fa7ea nodeName:}" failed. No retries permitted until 2024-09-04 17:21:26.240490609 +0000 UTC m=+88.444864491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8d723f91-bc42-4ae5-b58e-4b7e476fa7ea-calico-apiserver-certs") pod "calico-apiserver-6499889468-kfxr8" (UID: "8d723f91-bc42-4ae5-b58e-4b7e476fa7ea") : secret "calico-apiserver-certs" not found
Sep 4 17:21:26.473221 containerd[1457]: time="2024-09-04T17:21:26.472621749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6499889468-kfxr8,Uid:8d723f91-bc42-4ae5-b58e-4b7e476fa7ea,Namespace:calico-apiserver,Attempt:0,}"
Sep 4 17:21:26.967925 systemd-networkd[1396]: calie6e4863ab89: Link UP
Sep 4 17:21:26.977170 systemd-networkd[1396]: calie6e4863ab89: Gained carrier
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.625 [INFO][5119] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0 calico-apiserver-6499889468- calico-apiserver 8d723f91-bc42-4ae5-b58e-4b7e476fa7ea 1141 0 2024-09-04 17:21:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6499889468 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6499889468-kfxr8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie6e4863ab89 [] []}} ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Namespace="calico-apiserver" Pod="calico-apiserver-6499889468-kfxr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6499889468--kfxr8-"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.625 [INFO][5119] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Namespace="calico-apiserver" Pod="calico-apiserver-6499889468-kfxr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.721 [INFO][5127] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" HandleID="k8s-pod-network.8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Workload="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.760 [INFO][5127] ipam_plugin.go 270: Auto assigning IP ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" HandleID="k8s-pod-network.8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Workload="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6499889468-kfxr8", "timestamp":"2024-09-04 17:21:26.721188207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.760 [INFO][5127] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.760 [INFO][5127] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.760 [INFO][5127] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.775 [INFO][5127] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" host="localhost"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.808 [INFO][5127] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.837 [INFO][5127] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.851 [INFO][5127] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.869 [INFO][5127] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.869 [INFO][5127] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" host="localhost"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.891 [INFO][5127] ipam.go 1685: Creating new handle: k8s-pod-network.8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.918 [INFO][5127] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" host="localhost"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.939 [INFO][5127] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" host="localhost"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.939 [INFO][5127] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" host="localhost"
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.939 [INFO][5127] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:21:27.023803 containerd[1457]: 2024-09-04 17:21:26.939 [INFO][5127] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" HandleID="k8s-pod-network.8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Workload="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0"
Sep 4 17:21:27.024668 containerd[1457]: 2024-09-04 17:21:26.959 [INFO][5119] k8s.go 386: Populated endpoint ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Namespace="calico-apiserver" Pod="calico-apiserver-6499889468-kfxr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0", GenerateName:"calico-apiserver-6499889468-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d723f91-bc42-4ae5-b58e-4b7e476fa7ea", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6499889468", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6499889468-kfxr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6e4863ab89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:21:27.024668 containerd[1457]: 2024-09-04 17:21:26.960 [INFO][5119] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Namespace="calico-apiserver" Pod="calico-apiserver-6499889468-kfxr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0"
Sep 4 17:21:27.024668 containerd[1457]: 2024-09-04 17:21:26.960 [INFO][5119] dataplane_linux.go 68: Setting the host side veth name to calie6e4863ab89 ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Namespace="calico-apiserver" Pod="calico-apiserver-6499889468-kfxr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0"
Sep 4 17:21:27.024668 containerd[1457]: 2024-09-04 17:21:26.962 [INFO][5119] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Namespace="calico-apiserver" Pod="calico-apiserver-6499889468-kfxr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0"
Sep 4 17:21:27.024668 containerd[1457]: 2024-09-04 17:21:26.963 [INFO][5119] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Namespace="calico-apiserver" Pod="calico-apiserver-6499889468-kfxr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0", GenerateName:"calico-apiserver-6499889468-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d723f91-bc42-4ae5-b58e-4b7e476fa7ea", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 21, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6499889468", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41", Pod:"calico-apiserver-6499889468-kfxr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie6e4863ab89", MAC:"5a:8f:32:3d:98:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:21:27.024668 containerd[1457]: 2024-09-04 17:21:27.003 [INFO][5119] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41" Namespace="calico-apiserver" Pod="calico-apiserver-6499889468-kfxr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6499889468--kfxr8-eth0"
Sep 4 17:21:27.213284 containerd[1457]: time="2024-09-04T17:21:27.213106215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:21:27.213540 containerd[1457]: time="2024-09-04T17:21:27.213493204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:21:27.213706 containerd[1457]: time="2024-09-04T17:21:27.213652628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:21:27.213899 containerd[1457]: time="2024-09-04T17:21:27.213856588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:21:27.296101 systemd[1]: run-containerd-runc-k8s.io-8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41-runc.j1Lj3L.mount: Deactivated successfully.
Sep 4 17:21:27.352749 systemd[1]: Started cri-containerd-8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41.scope - libcontainer container 8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41.
Sep 4 17:21:27.417723 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 17:21:27.498110 containerd[1457]: time="2024-09-04T17:21:27.497640106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6499889468-kfxr8,Uid:8d723f91-bc42-4ae5-b58e-4b7e476fa7ea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41\""
Sep 4 17:21:27.507540 containerd[1457]: time="2024-09-04T17:21:27.507458835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Sep 4 17:21:27.912624 kernel: hrtimer: interrupt took 10900907 ns
Sep 4 17:21:28.568407 systemd[1]: Started sshd@24-10.0.0.43:22-10.0.0.1:51606.service - OpenSSH per-connection server daemon (10.0.0.1:51606).
Sep 4 17:21:28.593600 systemd-networkd[1396]: calie6e4863ab89: Gained IPv6LL
Sep 4 17:21:28.767645 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 51606 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:28.772972 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:28.786717 systemd-logind[1440]: New session 25 of user core.
Sep 4 17:21:28.797153 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:21:29.114207 sshd[5198]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:29.128009 systemd[1]: sshd@24-10.0.0.43:22-10.0.0.1:51606.service: Deactivated successfully.
Sep 4 17:21:29.145608 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:21:29.156928 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:21:29.159252 systemd-logind[1440]: Removed session 25.
Sep 4 17:21:29.900212 kubelet[2519]: E0904 17:21:29.900118 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:33.878650 containerd[1457]: time="2024-09-04T17:21:33.877618087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:21:33.899823 containerd[1457]: time="2024-09-04T17:21:33.897671390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Sep 4 17:21:33.924332 containerd[1457]: time="2024-09-04T17:21:33.921577137Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:21:33.938871 containerd[1457]: time="2024-09-04T17:21:33.936940258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:21:33.938871 containerd[1457]: time="2024-09-04T17:21:33.938658663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 6.431150645s"
Sep 4 17:21:33.938871 containerd[1457]: time="2024-09-04T17:21:33.938724488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Sep 4 17:21:33.945354 containerd[1457]: time="2024-09-04T17:21:33.944369251Z" level=info msg="CreateContainer within sandbox \"8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 17:21:34.016639 containerd[1457]: time="2024-09-04T17:21:34.012370293Z" level=info msg="CreateContainer within sandbox \"8b12caea62625af0612250b401333fa2cf77bce304c7e4f1226bc6512a35cf41\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8ee1565d76cb74ee1851835da89b2f65ffeb86d3545f81889aa57290ac1c1c30\""
Sep 4 17:21:34.019119 containerd[1457]: time="2024-09-04T17:21:34.017117343Z" level=info msg="StartContainer for \"8ee1565d76cb74ee1851835da89b2f65ffeb86d3545f81889aa57290ac1c1c30\""
Sep 4 17:21:34.174885 systemd[1]: Started cri-containerd-8ee1565d76cb74ee1851835da89b2f65ffeb86d3545f81889aa57290ac1c1c30.scope - libcontainer container 8ee1565d76cb74ee1851835da89b2f65ffeb86d3545f81889aa57290ac1c1c30.
Sep 4 17:21:34.217555 systemd[1]: Started sshd@25-10.0.0.43:22-10.0.0.1:51616.service - OpenSSH per-connection server daemon (10.0.0.1:51616).
Sep 4 17:21:34.380468 containerd[1457]: time="2024-09-04T17:21:34.380321053Z" level=info msg="StartContainer for \"8ee1565d76cb74ee1851835da89b2f65ffeb86d3545f81889aa57290ac1c1c30\" returns successfully"
Sep 4 17:21:34.409947 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 51616 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:34.413227 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:34.447896 systemd-logind[1440]: New session 26 of user core.
Sep 4 17:21:34.466170 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:21:34.784557 sshd[5247]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:34.796665 systemd[1]: sshd@25-10.0.0.43:22-10.0.0.1:51616.service: Deactivated successfully.
Sep 4 17:21:34.801582 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:21:34.812137 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:21:34.825783 systemd-logind[1440]: Removed session 26.
Sep 4 17:21:34.900976 kubelet[2519]: E0904 17:21:34.900450 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:21:35.213732 kubelet[2519]: I0904 17:21:35.213289 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6499889468-kfxr8" podStartSLOduration=3.781149659 podCreationTimestamp="2024-09-04 17:21:25 +0000 UTC" firstStartedPulling="2024-09-04 17:21:27.507216533 +0000 UTC m=+89.711590415" lastFinishedPulling="2024-09-04 17:21:33.939285678 +0000 UTC m=+96.143659560" observedRunningTime="2024-09-04 17:21:35.1803303 +0000 UTC m=+97.384704182" watchObservedRunningTime="2024-09-04 17:21:35.213218804 +0000 UTC m=+97.417592696"
Sep 4 17:21:39.813548 systemd[1]: Started sshd@26-10.0.0.43:22-10.0.0.1:57214.service - OpenSSH per-connection server daemon (10.0.0.1:57214).
Sep 4 17:21:39.930844 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 57214 ssh2: RSA SHA256:bMjcccmFE5G91IKdaJGL1wI8ShAH+BtWSJoLKyofUd8
Sep 4 17:21:39.935281 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:21:39.982972 systemd-logind[1440]: New session 27 of user core.
Sep 4 17:21:39.997329 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 17:21:40.256640 sshd[5289]: pam_unix(sshd:session): session closed for user core
Sep 4 17:21:40.264081 systemd[1]: sshd@26-10.0.0.43:22-10.0.0.1:57214.service: Deactivated successfully.
Sep 4 17:21:40.269248 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 17:21:40.273841 systemd-logind[1440]: Session 27 logged out. Waiting for processes to exit.
Sep 4 17:21:40.279099 systemd-logind[1440]: Removed session 27.
Sep 4 17:21:40.896801 kubelet[2519]: E0904 17:21:40.896738 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"