Jun 25 18:43:50.947415 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024
Jun 25 18:43:50.947436 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:43:50.947447 kernel: BIOS-provided physical RAM map:
Jun 25 18:43:50.947454 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 25 18:43:50.947460 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jun 25 18:43:50.947466 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jun 25 18:43:50.947474 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jun 25 18:43:50.947480 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jun 25 18:43:50.947487 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jun 25 18:43:50.947493 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jun 25 18:43:50.947502 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jun 25 18:43:50.947508 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jun 25 18:43:50.947514 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jun 25 18:43:50.947521 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jun 25 18:43:50.947529 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jun 25 18:43:50.947538 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jun 25 18:43:50.947545 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jun 25 18:43:50.947552 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jun 25 18:43:50.947558 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jun 25 18:43:50.947565 kernel: NX (Execute Disable) protection: active
Jun 25 18:43:50.947572 kernel: APIC: Static calls initialized
Jun 25 18:43:50.947579 kernel: efi: EFI v2.7 by EDK II
Jun 25 18:43:50.947586 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b5ef418
Jun 25 18:43:50.947593 kernel: SMBIOS 2.8 present.
Jun 25 18:43:50.947600 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Jun 25 18:43:50.947606 kernel: Hypervisor detected: KVM
Jun 25 18:43:50.947613 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 25 18:43:50.947622 kernel: kvm-clock: using sched offset of 4473109205 cycles
Jun 25 18:43:50.947629 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 25 18:43:50.947637 kernel: tsc: Detected 2794.750 MHz processor
Jun 25 18:43:50.947644 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 25 18:43:50.947651 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 25 18:43:50.947659 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jun 25 18:43:50.947666 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 25 18:43:50.947673 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 25 18:43:50.947680 kernel: Using GB pages for direct mapping
Jun 25 18:43:50.947690 kernel: Secure boot disabled
Jun 25 18:43:50.947697 kernel: ACPI: Early table checksum verification disabled
Jun 25 18:43:50.947704 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jun 25 18:43:50.947711 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Jun 25 18:43:50.947721 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:50.947736 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:50.947746 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jun 25 18:43:50.947753 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:50.947760 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:50.947768 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 18:43:50.947775 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Jun 25 18:43:50.947782 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Jun 25 18:43:50.947789 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Jun 25 18:43:50.947796 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jun 25 18:43:50.947806 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Jun 25 18:43:50.947813 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Jun 25 18:43:50.947820 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Jun 25 18:43:50.947827 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Jun 25 18:43:50.947834 kernel: No NUMA configuration found
Jun 25 18:43:50.947841 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jun 25 18:43:50.947849 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jun 25 18:43:50.947856 kernel: Zone ranges:
Jun 25 18:43:50.947863 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 25 18:43:50.947884 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jun 25 18:43:50.947891 kernel: Normal empty
Jun 25 18:43:50.947898 kernel: Movable zone start for each node
Jun 25 18:43:50.947905 kernel: Early memory node ranges
Jun 25 18:43:50.947912 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 25 18:43:50.947919 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jun 25 18:43:50.947927 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jun 25 18:43:50.947934 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jun 25 18:43:50.947941 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jun 25 18:43:50.947948 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jun 25 18:43:50.947957 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jun 25 18:43:50.947965 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 18:43:50.947972 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 25 18:43:50.947979 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jun 25 18:43:50.947987 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 18:43:50.947994 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jun 25 18:43:50.948001 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jun 25 18:43:50.948009 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jun 25 18:43:50.948016 kernel: ACPI: PM-Timer IO Port: 0xb008
Jun 25 18:43:50.948025 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 25 18:43:50.948033 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 25 18:43:50.948040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 25 18:43:50.948047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 25 18:43:50.948055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 25 18:43:50.948062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 25 18:43:50.948069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 25 18:43:50.948076 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 25 18:43:50.948084 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 25 18:43:50.948093 kernel: TSC deadline timer available
Jun 25 18:43:50.948100 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jun 25 18:43:50.948108 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 25 18:43:50.948115 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 25 18:43:50.948122 kernel: kvm-guest: setup PV sched yield
Jun 25 18:43:50.948129 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Jun 25 18:43:50.948137 kernel: Booting paravirtualized kernel on KVM
Jun 25 18:43:50.948144 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 25 18:43:50.948152 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jun 25 18:43:50.948162 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Jun 25 18:43:50.948169 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Jun 25 18:43:50.948176 kernel: pcpu-alloc: [0] 0 1 2 3
Jun 25 18:43:50.948183 kernel: kvm-guest: PV spinlocks enabled
Jun 25 18:43:50.948190 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 25 18:43:50.948199 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8
Jun 25 18:43:50.948207 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 18:43:50.948214 kernel: random: crng init done
Jun 25 18:43:50.948223 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 25 18:43:50.948231 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 25 18:43:50.948238 kernel: Fallback order for Node 0: 0
Jun 25 18:43:50.948245 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jun 25 18:43:50.948253 kernel: Policy zone: DMA32
Jun 25 18:43:50.948260 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 18:43:50.948267 kernel: Memory: 2395516K/2567000K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 171224K reserved, 0K cma-reserved)
Jun 25 18:43:50.948275 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 25 18:43:50.948282 kernel: ftrace: allocating 37650 entries in 148 pages
Jun 25 18:43:50.948292 kernel: ftrace: allocated 148 pages with 3 groups
Jun 25 18:43:50.948299 kernel: Dynamic Preempt: voluntary
Jun 25 18:43:50.948306 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 18:43:50.948318 kernel: rcu: RCU event tracing is enabled.
Jun 25 18:43:50.948325 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 25 18:43:50.948342 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 18:43:50.948350 kernel: Rude variant of Tasks RCU enabled.
Jun 25 18:43:50.948358 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 18:43:50.948365 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 18:43:50.948373 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 25 18:43:50.948380 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jun 25 18:43:50.948388 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 18:43:50.948395 kernel: Console: colour dummy device 80x25
Jun 25 18:43:50.948405 kernel: printk: console [ttyS0] enabled
Jun 25 18:43:50.948413 kernel: ACPI: Core revision 20230628
Jun 25 18:43:50.948421 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 25 18:43:50.948428 kernel: APIC: Switch to symmetric I/O mode setup
Jun 25 18:43:50.948436 kernel: x2apic enabled
Jun 25 18:43:50.948446 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 25 18:43:50.948453 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jun 25 18:43:50.948461 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jun 25 18:43:50.948469 kernel: kvm-guest: setup PV IPIs
Jun 25 18:43:50.948476 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 25 18:43:50.948484 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jun 25 18:43:50.948491 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jun 25 18:43:50.948499 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 25 18:43:50.948506 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 25 18:43:50.948516 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 25 18:43:50.948524 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 25 18:43:50.948531 kernel: Spectre V2 : Mitigation: Retpolines
Jun 25 18:43:50.948539 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 25 18:43:50.948547 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 25 18:43:50.948554 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 25 18:43:50.948562 kernel: RETBleed: Mitigation: untrained return thunk
Jun 25 18:43:50.948569 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 25 18:43:50.948579 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 25 18:43:50.948587 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jun 25 18:43:50.948595 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jun 25 18:43:50.948603 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jun 25 18:43:50.948611 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 25 18:43:50.948618 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 25 18:43:50.948626 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 25 18:43:50.948634 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 25 18:43:50.948641 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 25 18:43:50.948651 kernel: Freeing SMP alternatives memory: 32K
Jun 25 18:43:50.948659 kernel: pid_max: default: 32768 minimum: 301
Jun 25 18:43:50.948667 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jun 25 18:43:50.948674 kernel: SELinux: Initializing.
Jun 25 18:43:50.948682 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:43:50.948690 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 18:43:50.948698 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 25 18:43:50.948705 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:43:50.948713 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:43:50.948723 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jun 25 18:43:50.948737 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 25 18:43:50.948745 kernel: ... version: 0
Jun 25 18:43:50.948753 kernel: ... bit width: 48
Jun 25 18:43:50.948760 kernel: ... generic registers: 6
Jun 25 18:43:50.948768 kernel: ... value mask: 0000ffffffffffff
Jun 25 18:43:50.948776 kernel: ... max period: 00007fffffffffff
Jun 25 18:43:50.948783 kernel: ... fixed-purpose events: 0
Jun 25 18:43:50.948791 kernel: ... event mask: 000000000000003f
Jun 25 18:43:50.948801 kernel: signal: max sigframe size: 1776
Jun 25 18:43:50.948808 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 18:43:50.948816 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 18:43:50.948823 kernel: smp: Bringing up secondary CPUs ...
Jun 25 18:43:50.948831 kernel: smpboot: x86: Booting SMP configuration:
Jun 25 18:43:50.948838 kernel: .... node #0, CPUs: #1 #2 #3
Jun 25 18:43:50.948846 kernel: smp: Brought up 1 node, 4 CPUs
Jun 25 18:43:50.948854 kernel: smpboot: Max logical packages: 1
Jun 25 18:43:50.948861 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jun 25 18:43:50.948881 kernel: devtmpfs: initialized
Jun 25 18:43:50.948888 kernel: x86/mm: Memory block size: 128MB
Jun 25 18:43:50.948896 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jun 25 18:43:50.948904 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jun 25 18:43:50.948912 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jun 25 18:43:50.948919 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jun 25 18:43:50.948927 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jun 25 18:43:50.948935 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 18:43:50.948943 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 25 18:43:50.948952 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 18:43:50.948960 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 18:43:50.948968 kernel: audit: initializing netlink subsys (disabled)
Jun 25 18:43:50.948975 kernel: audit: type=2000 audit(1719341030.193:1): state=initialized audit_enabled=0 res=1
Jun 25 18:43:50.948983 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 18:43:50.948991 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 25 18:43:50.948998 kernel: cpuidle: using governor menu
Jun 25 18:43:50.949006 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 18:43:50.949014 kernel: dca service started, version 1.12.1
Jun 25 18:43:50.949024 kernel: PCI: Using configuration type 1 for base access
Jun 25 18:43:50.949032 kernel: PCI: Using configuration type 1 for extended access
Jun 25 18:43:50.949039 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 25 18:43:50.949047 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 18:43:50.949055 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 18:43:50.949062 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 18:43:50.949070 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 18:43:50.949078 kernel: ACPI: Added _OSI(Module Device)
Jun 25 18:43:50.949086 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 18:43:50.949097 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 18:43:50.949106 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 18:43:50.949115 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 18:43:50.949123 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 25 18:43:50.949130 kernel: ACPI: Interpreter enabled
Jun 25 18:43:50.949138 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 25 18:43:50.949145 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 25 18:43:50.949153 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 25 18:43:50.949161 kernel: PCI: Using E820 reservations for host bridge windows
Jun 25 18:43:50.949171 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 25 18:43:50.949178 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 18:43:50.949353 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 18:43:50.949365 kernel: acpiphp: Slot [3] registered
Jun 25 18:43:50.949373 kernel: acpiphp: Slot [4] registered
Jun 25 18:43:50.949380 kernel: acpiphp: Slot [5] registered
Jun 25 18:43:50.949388 kernel: acpiphp: Slot [6] registered
Jun 25 18:43:50.949396 kernel: acpiphp: Slot [7] registered
Jun 25 18:43:50.949406 kernel: acpiphp: Slot [8] registered
Jun 25 18:43:50.949414 kernel: acpiphp: Slot [9] registered
Jun 25 18:43:50.949421 kernel: acpiphp: Slot [10] registered
Jun 25 18:43:50.949428 kernel: acpiphp: Slot [11] registered
Jun 25 18:43:50.949436 kernel: acpiphp: Slot [12] registered
Jun 25 18:43:50.949444 kernel: acpiphp: Slot [13] registered
Jun 25 18:43:50.949451 kernel: acpiphp: Slot [14] registered
Jun 25 18:43:50.949458 kernel: acpiphp: Slot [15] registered
Jun 25 18:43:50.949466 kernel: acpiphp: Slot [16] registered
Jun 25 18:43:50.949475 kernel: acpiphp: Slot [17] registered
Jun 25 18:43:50.949483 kernel: acpiphp: Slot [18] registered
Jun 25 18:43:50.949490 kernel: acpiphp: Slot [19] registered
Jun 25 18:43:50.949498 kernel: acpiphp: Slot [20] registered
Jun 25 18:43:50.949505 kernel: acpiphp: Slot [21] registered
Jun 25 18:43:50.949513 kernel: acpiphp: Slot [22] registered
Jun 25 18:43:50.949520 kernel: acpiphp: Slot [23] registered
Jun 25 18:43:50.949527 kernel: acpiphp: Slot [24] registered
Jun 25 18:43:50.949535 kernel: acpiphp: Slot [25] registered
Jun 25 18:43:50.949542 kernel: acpiphp: Slot [26] registered
Jun 25 18:43:50.949552 kernel: acpiphp: Slot [27] registered
Jun 25 18:43:50.949559 kernel: acpiphp: Slot [28] registered
Jun 25 18:43:50.949567 kernel: acpiphp: Slot [29] registered
Jun 25 18:43:50.949574 kernel: acpiphp: Slot [30] registered
Jun 25 18:43:50.949582 kernel: acpiphp: Slot [31] registered
Jun 25 18:43:50.949589 kernel: PCI host bridge to bus 0000:00
Jun 25 18:43:50.949719 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 25 18:43:50.949840 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 25 18:43:50.949965 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 25 18:43:50.950074 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jun 25 18:43:50.950186 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Jun 25 18:43:50.950297 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 18:43:50.950440 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 25 18:43:50.950608 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jun 25 18:43:50.950766 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jun 25 18:43:50.950908 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jun 25 18:43:50.951031 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jun 25 18:43:50.951151 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jun 25 18:43:50.951273 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jun 25 18:43:50.951392 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jun 25 18:43:50.951520 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jun 25 18:43:50.951645 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jun 25 18:43:50.951774 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jun 25 18:43:50.951914 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jun 25 18:43:50.952035 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jun 25 18:43:50.952157 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Jun 25 18:43:50.952276 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jun 25 18:43:50.952393 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Jun 25 18:43:50.952515 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 25 18:43:50.952643 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jun 25 18:43:50.952773 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jun 25 18:43:50.952914 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jun 25 18:43:50.953036 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jun 25 18:43:50.953176 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jun 25 18:43:50.953326 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jun 25 18:43:50.953457 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jun 25 18:43:50.953576 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jun 25 18:43:50.953709 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jun 25 18:43:50.953852 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jun 25 18:43:50.953987 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Jun 25 18:43:50.954112 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jun 25 18:43:50.954248 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jun 25 18:43:50.954271 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 25 18:43:50.954283 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 25 18:43:50.954294 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 25 18:43:50.954302 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 25 18:43:50.954310 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 25 18:43:50.954318 kernel: iommu: Default domain type: Translated
Jun 25 18:43:50.954325 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 25 18:43:50.954333 kernel: efivars: Registered efivars operations
Jun 25 18:43:50.954340 kernel: PCI: Using ACPI for IRQ routing
Jun 25 18:43:50.954352 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 25 18:43:50.954359 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jun 25 18:43:50.954367 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jun 25 18:43:50.954375 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jun 25 18:43:50.954382 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jun 25 18:43:50.954517 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 25 18:43:50.954638 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 25 18:43:50.954766 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 25 18:43:50.954779 kernel: vgaarb: loaded
Jun 25 18:43:50.954787 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 25 18:43:50.954795 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 25 18:43:50.954803 kernel: clocksource: Switched to clocksource kvm-clock
Jun 25 18:43:50.954810 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 18:43:50.954818 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 18:43:50.954826 kernel: pnp: PnP ACPI init
Jun 25 18:43:50.954965 kernel: pnp 00:02: [dma 2]
Jun 25 18:43:50.954977 kernel: pnp: PnP ACPI: found 6 devices
Jun 25 18:43:50.954988 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 25 18:43:50.954995 kernel: NET: Registered PF_INET protocol family
Jun 25 18:43:50.955003 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 18:43:50.955011 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 25 18:43:50.955019 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 18:43:50.955026 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 18:43:50.955034 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 25 18:43:50.955042 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 25 18:43:50.955052 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:43:50.955059 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 18:43:50.955067 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 18:43:50.955074 kernel: NET: Registered PF_XDP protocol family
Jun 25 18:43:50.955201 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jun 25 18:43:50.955328 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jun 25 18:43:50.955443 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 25 18:43:50.955551 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 25 18:43:50.955665 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 25 18:43:50.955784 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jun 25 18:43:50.955905 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Jun 25 18:43:50.956027 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 25 18:43:50.956147 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 25 18:43:50.956158 kernel: PCI: CLS 0 bytes, default 64
Jun 25 18:43:50.956166 kernel: Initialise system trusted keyrings
Jun 25 18:43:50.956173 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 25 18:43:50.956184 kernel: Key type asymmetric registered
Jun 25 18:43:50.956192 kernel: Asymmetric key parser 'x509' registered
Jun 25 18:43:50.956200 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 25 18:43:50.956207 kernel: io scheduler mq-deadline registered
Jun 25 18:43:50.956215 kernel: io scheduler kyber registered
Jun 25 18:43:50.956223 kernel: io scheduler bfq registered
Jun 25 18:43:50.956230 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 25 18:43:50.956239 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 25 18:43:50.956246 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jun 25 18:43:50.956254 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 25 18:43:50.956264 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 18:43:50.956272 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 25 18:43:50.956281 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 25 18:43:50.956305 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 25 18:43:50.956315 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 25 18:43:50.956512 kernel: rtc_cmos 00:05: RTC can wake from S4
Jun 25 18:43:50.956525 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 25 18:43:50.956637 kernel: rtc_cmos 00:05: registered as rtc0
Jun 25 18:43:50.956766 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T18:43:50 UTC (1719341030)
Jun 25 18:43:50.956939 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jun 25 18:43:50.956950 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 25 18:43:50.956958 kernel: efifb: probing for efifb
Jun 25 18:43:50.956966 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jun 25 18:43:50.956974 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jun 25 18:43:50.956982 kernel: efifb: scrolling: redraw
Jun 25 18:43:50.956991 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jun 25 18:43:50.957002 kernel: Console: switching to colour frame buffer device 100x37
Jun 25 18:43:50.957010 kernel: fb0: EFI VGA frame buffer device
Jun 25 18:43:50.957019 kernel: pstore: Using crash dump compression: deflate
Jun 25 18:43:50.957027 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 25 18:43:50.957035 kernel: NET: Registered PF_INET6 protocol family
Jun 25 18:43:50.957043 kernel: Segment Routing with IPv6
Jun 25 18:43:50.957051 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 18:43:50.957059 kernel: NET: Registered PF_PACKET protocol family
Jun 25 18:43:50.957067 kernel: Key type dns_resolver registered
Jun 25 18:43:50.957075 kernel: IPI shorthand broadcast: enabled
Jun 25 18:43:50.957086 kernel: sched_clock: Marking stable (667002374, 113956258)->(844693660, -63735028)
Jun 25 18:43:50.957097 kernel: registered taskstats version 1
Jun 25 18:43:50.957105 kernel: Loading compiled-in X.509 certificates
Jun 25 18:43:50.957113 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90'
Jun 25 18:43:50.957121 kernel: Key type .fscrypt registered
Jun 25 18:43:50.957134 kernel: Key type fscrypt-provisioning registered
Jun 25 18:43:50.957142 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 18:43:50.957150 kernel: ima: Allocated hash algorithm: sha1
Jun 25 18:43:50.957158 kernel: ima: No architecture policies found
Jun 25 18:43:50.957167 kernel: clk: Disabling unused clocks
Jun 25 18:43:50.957175 kernel: Freeing unused kernel image (initmem) memory: 49384K
Jun 25 18:43:50.957183 kernel: Write protecting the kernel read-only data: 36864k
Jun 25 18:43:50.957191 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K
Jun 25 18:43:50.957199 kernel: Run /init as init process
Jun 25 18:43:50.957209 kernel: with arguments:
Jun 25 18:43:50.957217 kernel: /init
Jun 25 18:43:50.957225 kernel: with environment:
Jun 25 18:43:50.957233 kernel: HOME=/
Jun 25 18:43:50.957241 kernel: TERM=linux
Jun 25 18:43:50.957251 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 18:43:50.957264 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jun 25 18:43:50.957279 systemd[1]: Detected virtualization kvm.
Jun 25 18:43:50.957288 systemd[1]: Detected architecture x86-64.
Jun 25 18:43:50.957296 systemd[1]: Running in initrd.
Jun 25 18:43:50.957304 systemd[1]: No hostname configured, using default hostname.
Jun 25 18:43:50.957313 systemd[1]: Hostname set to <localhost>.
Jun 25 18:43:50.957321 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 18:43:50.957330 systemd[1]: Queued start job for default target initrd.target.
Jun 25 18:43:50.957339 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 25 18:43:50.957350 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 18:43:50.957359 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 25 18:43:50.957367 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 25 18:43:50.957376 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 25 18:43:50.957385 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 25 18:43:50.957395 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 25 18:43:50.957404 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 25 18:43:50.957415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 25 18:43:50.957423 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:43:50.957432 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:43:50.957441 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 18:43:50.957449 systemd[1]: Reached target swap.target - Swaps.
Jun 25 18:43:50.957458 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:43:50.957466 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 18:43:50.957475 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 18:43:50.957483 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 18:43:50.957494 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 18:43:50.957503 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:43:50.957511 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:43:50.957520 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:43:50.957528 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:43:50.957537 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:43:50.957545 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:43:50.957554 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:43:50.957565 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:43:50.957577 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:43:50.957589 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:43:50.957602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:50.957616 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:43:50.957630 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:43:50.957644 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:43:50.957687 systemd-journald[193]: Collecting audit messages is disabled. Jun 25 18:43:50.957707 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:43:50.957718 systemd-journald[193]: Journal started Jun 25 18:43:50.957748 systemd-journald[193]: Runtime Journal (/run/log/journal/64d0178fdc6148408d2e9e0254444420) is 6.0M, max 48.3M, 42.3M free. Jun 25 18:43:50.960882 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 25 18:43:50.958767 systemd-modules-load[194]: Inserted module 'overlay' Jun 25 18:43:50.964304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:43:50.965708 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:50.968399 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:43:50.972977 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:43:50.977040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:43:50.987146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:43:50.989181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:43:50.994891 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:43:50.995560 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:50.998966 systemd-modules-load[194]: Inserted module 'br_netfilter' Jun 25 18:43:50.999909 kernel: Bridge firewalling registered Jun 25 18:43:51.005021 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:43:51.007143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:43:51.010484 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 25 18:43:51.019266 dracut-cmdline[220]: dracut-dracut-053 Jun 25 18:43:51.027830 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:43:51.040823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:43:51.062056 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:43:51.092826 systemd-resolved[262]: Positive Trust Anchors: Jun 25 18:43:51.092842 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:43:51.092884 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:43:51.103546 systemd-resolved[262]: Defaulting to hostname 'linux'. Jun 25 18:43:51.105379 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:43:51.106595 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:43:51.125885 kernel: SCSI subsystem initialized Jun 25 18:43:51.137887 kernel: Loading iSCSI transport class v2.0-870. 
Jun 25 18:43:51.149896 kernel: iscsi: registered transport (tcp) Jun 25 18:43:51.177891 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:43:51.177919 kernel: QLogic iSCSI HBA Driver Jun 25 18:43:51.231440 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:43:51.250176 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:43:51.277893 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:43:51.277927 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:43:51.279488 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:43:51.324901 kernel: raid6: avx2x4 gen() 27549 MB/s Jun 25 18:43:51.341892 kernel: raid6: avx2x2 gen() 28105 MB/s Jun 25 18:43:51.358985 kernel: raid6: avx2x1 gen() 24269 MB/s Jun 25 18:43:51.359008 kernel: raid6: using algorithm avx2x2 gen() 28105 MB/s Jun 25 18:43:51.377034 kernel: raid6: .... xor() 19543 MB/s, rmw enabled Jun 25 18:43:51.377059 kernel: raid6: using avx2x2 recovery algorithm Jun 25 18:43:51.402892 kernel: xor: automatically using best checksumming function avx Jun 25 18:43:51.573899 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:43:51.586207 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:43:51.594049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:43:51.616930 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jun 25 18:43:51.624143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:43:51.627997 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:43:51.644570 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jun 25 18:43:51.675593 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jun 25 18:43:51.693158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:43:51.760563 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:43:51.768116 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:43:51.786054 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:43:51.789491 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:43:51.792580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:43:51.795123 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:43:51.806143 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 25 18:43:51.833782 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:43:51.833807 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 18:43:51.834034 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:43:51.834052 kernel: GPT:9289727 != 19775487 Jun 25 18:43:51.834066 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:43:51.834081 kernel: GPT:9289727 != 19775487 Jun 25 18:43:51.834094 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:43:51.834117 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:43:51.835439 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 18:43:51.835459 kernel: AES CTR mode by8 optimization enabled Jun 25 18:43:51.835473 kernel: libata version 3.00 loaded. Jun 25 18:43:51.814537 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:43:51.834066 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jun 25 18:43:51.839633 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 18:43:51.845465 kernel: scsi host0: ata_piix Jun 25 18:43:51.845630 kernel: scsi host1: ata_piix Jun 25 18:43:51.845786 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jun 25 18:43:51.845799 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jun 25 18:43:51.863938 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (460) Jun 25 18:43:51.866006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:43:51.867427 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:51.872911 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Jun 25 18:43:51.872382 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:43:51.874306 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:43:51.874531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:51.876587 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:51.885195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:51.898120 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 18:43:51.899210 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:51.911365 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 18:43:51.915949 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 18:43:51.917318 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jun 25 18:43:51.925914 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 18:43:51.942039 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:43:51.944206 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:43:51.950139 disk-uuid[544]: Primary Header is updated. Jun 25 18:43:51.950139 disk-uuid[544]: Secondary Entries is updated. Jun 25 18:43:51.950139 disk-uuid[544]: Secondary Header is updated. Jun 25 18:43:51.954166 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:43:51.956890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:43:51.973374 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:52.003507 kernel: ata2: found unknown device (class 0) Jun 25 18:43:52.003565 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 25 18:43:52.005909 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 25 18:43:52.053948 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 25 18:43:52.066927 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:43:52.066943 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 25 18:43:52.958897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 18:43:52.959057 disk-uuid[545]: The operation has completed successfully. Jun 25 18:43:52.988913 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:43:52.989029 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:43:53.012008 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:43:53.017651 sh[581]: Success Jun 25 18:43:53.031899 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 25 18:43:53.064585 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jun 25 18:43:53.077365 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:43:53.082055 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:43:53.091048 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:43:53.091081 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:53.091092 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:43:53.092123 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:43:53.092884 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:43:53.097356 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:43:53.098904 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:43:53.117017 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:43:53.118635 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:43:53.128745 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:53.128776 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:53.128787 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:43:53.131887 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:43:53.141328 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:43:53.143888 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:53.153643 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:43:53.162002 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jun 25 18:43:53.212062 ignition[679]: Ignition 2.19.0 Jun 25 18:43:53.212076 ignition[679]: Stage: fetch-offline Jun 25 18:43:53.212119 ignition[679]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:53.212130 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:53.212232 ignition[679]: parsed url from cmdline: "" Jun 25 18:43:53.212237 ignition[679]: no config URL provided Jun 25 18:43:53.212242 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:43:53.212252 ignition[679]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:43:53.212279 ignition[679]: op(1): [started] loading QEMU firmware config module Jun 25 18:43:53.212285 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 18:43:53.222279 ignition[679]: op(1): [finished] loading QEMU firmware config module Jun 25 18:43:53.238857 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:43:53.249984 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:43:53.267347 ignition[679]: parsing config with SHA512: d74cf5f954f8f954b7a34987cf06827f2f6a8efb4b4eadf41051e893e9a58ffd91837ded689fa2cfbd664b6448098211c74b5faff6aebdb33e74350387787e6a Jun 25 18:43:53.270733 unknown[679]: fetched base config from "system" Jun 25 18:43:53.270747 unknown[679]: fetched user config from "qemu" Jun 25 18:43:53.271904 ignition[679]: fetch-offline: fetch-offline passed Jun 25 18:43:53.272753 ignition[679]: Ignition finished successfully Jun 25 18:43:53.273305 systemd-networkd[771]: lo: Link UP Jun 25 18:43:53.273309 systemd-networkd[771]: lo: Gained carrier Jun 25 18:43:53.274979 systemd-networkd[771]: Enumeration completed Jun 25 18:43:53.275437 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 18:43:53.275441 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:43:53.276340 systemd-networkd[771]: eth0: Link UP Jun 25 18:43:53.276343 systemd-networkd[771]: eth0: Gained carrier Jun 25 18:43:53.276351 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:43:53.276487 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:43:53.282696 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:43:53.289622 systemd[1]: Reached target network.target - Network. Jun 25 18:43:53.291645 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 18:43:53.294925 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 18:43:53.302015 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:43:53.318681 ignition[774]: Ignition 2.19.0 Jun 25 18:43:53.318695 ignition[774]: Stage: kargs Jun 25 18:43:53.318947 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:53.318961 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:53.323260 ignition[774]: kargs: kargs passed Jun 25 18:43:53.323324 ignition[774]: Ignition finished successfully Jun 25 18:43:53.327997 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:43:53.341137 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 25 18:43:53.356944 ignition[784]: Ignition 2.19.0 Jun 25 18:43:53.356958 ignition[784]: Stage: disks Jun 25 18:43:53.357185 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:53.357198 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:53.361467 ignition[784]: disks: disks passed Jun 25 18:43:53.361530 ignition[784]: Ignition finished successfully Jun 25 18:43:53.364589 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:43:53.367105 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:43:53.367182 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:43:53.370703 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:43:53.372956 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:43:53.373006 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:43:53.387067 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:43:53.408738 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:43:53.446352 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:43:53.459023 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:43:53.560904 kernel: EXT4-fs (vda9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:43:53.561661 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:43:53.562381 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:43:53.576947 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:43:53.578041 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:43:53.580844 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jun 25 18:43:53.580898 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:43:53.580919 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:43:53.586462 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:43:53.587252 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:43:53.593892 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803) Jun 25 18:43:53.596264 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:53.596318 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:53.596329 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:43:53.599890 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:43:53.601079 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:43:53.626940 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:43:53.631269 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:43:53.636028 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:43:53.641379 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:43:53.729545 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:43:53.742059 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:43:53.744142 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:43:53.751894 kernel: BTRFS info (device vda6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:53.770076 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 25 18:43:53.796457 ignition[923]: INFO : Ignition 2.19.0 Jun 25 18:43:53.796457 ignition[923]: INFO : Stage: mount Jun 25 18:43:53.798445 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:53.798445 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:53.798445 ignition[923]: INFO : mount: mount passed Jun 25 18:43:53.798445 ignition[923]: INFO : Ignition finished successfully Jun 25 18:43:53.805029 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:43:53.817963 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:43:54.090436 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:43:54.100996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:43:54.109267 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (932) Jun 25 18:43:54.109299 kernel: BTRFS info (device vda6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:54.109314 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:54.110155 kernel: BTRFS info (device vda6): using free space tree Jun 25 18:43:54.123891 kernel: BTRFS info (device vda6): auto enabling async discard Jun 25 18:43:54.125577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:43:54.151369 ignition[950]: INFO : Ignition 2.19.0 Jun 25 18:43:54.151369 ignition[950]: INFO : Stage: files Jun 25 18:43:54.153231 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:54.153231 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:54.156036 ignition[950]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:43:54.157961 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:43:54.157961 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:43:54.161504 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:43:54.162997 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:43:54.164730 unknown[950]: wrote ssh authorized keys file for user: core Jun 25 18:43:54.165919 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:43:54.168628 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:43:54.170886 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:43:55.297086 systemd-networkd[771]: eth0: Gained IPv6LL Jun 25 18:43:55.551602 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:43:58.214777 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:43:58.214777 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:43:58.218751 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:43:58.218751 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:43:58.218751 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:43:58.218751 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:43:58.218751 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:43:58.218751 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:43:58.228836 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:43:58.230561 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:43:58.232411 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:43:58.234180 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:43:58.236742 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:43:58.239189 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:43:58.241292 ignition[950]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jun 25 18:43:58.722175 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:43:59.059483 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 18:43:59.059483 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:43:59.062961 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:43:59.062961 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:43:59.062961 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:43:59.062961 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 25 18:43:59.062961 ignition[950]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:43:59.062961 ignition[950]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 18:43:59.062961 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jun 25 18:43:59.062961 ignition[950]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 18:43:59.099762 ignition[950]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:43:59.104885 ignition[950]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 18:43:59.106774 ignition[950]: INFO : 
files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 18:43:59.106774 ignition[950]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:43:59.106774 ignition[950]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:43:59.106774 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:43:59.106774 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:43:59.106774 ignition[950]: INFO : files: files passed Jun 25 18:43:59.106774 ignition[950]: INFO : Ignition finished successfully Jun 25 18:43:59.114558 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:43:59.123015 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:43:59.127090 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:43:59.127466 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:43:59.127581 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:43:59.145265 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 18:43:59.149341 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:43:59.149341 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:43:59.153001 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:43:59.156185 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jun 25 18:43:59.159991 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:43:59.175141 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:43:59.200751 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:43:59.202028 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:43:59.205853 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:43:59.208437 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:43:59.211101 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:43:59.223073 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:43:59.236540 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:43:59.247992 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:43:59.260913 systemd[1]: Stopped target network.target - Network. Jun 25 18:43:59.262119 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:43:59.264437 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:43:59.267271 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:43:59.269286 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:43:59.269391 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:43:59.272316 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:43:59.273481 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:43:59.273816 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:43:59.274319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jun 25 18:43:59.274659 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:43:59.275161 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:43:59.275498 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:43:59.275850 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:43:59.276353 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:43:59.276686 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:43:59.277168 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:43:59.277269 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:43:59.277863 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:43:59.278376 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:43:59.278682 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:43:59.278768 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:43:59.279219 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:43:59.279325 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:43:59.303643 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:43:59.303753 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:43:59.306058 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:43:59.307234 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:43:59.312917 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:43:59.313068 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:43:59.316947 systemd[1]: Stopped target sockets.target - Socket Units. 
Jun 25 18:43:59.319158 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:43:59.319250 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:43:59.321439 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:43:59.321543 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:43:59.323318 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:43:59.323427 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:43:59.325562 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:43:59.325664 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:43:59.336990 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:43:59.337825 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:43:59.338522 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:43:59.339062 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:43:59.339449 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:43:59.339591 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:43:59.340495 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:43:59.340683 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:43:59.345344 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:43:59.345510 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:43:59.355604 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:43:59.355780 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:43:59.358859 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jun 25 18:43:59.358939 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:43:59.362957 systemd-networkd[771]: eth0: DHCPv6 lease lost Jun 25 18:43:59.365503 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:43:59.366127 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:43:59.366281 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:43:59.368844 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:43:59.369241 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:43:59.377360 ignition[1004]: INFO : Ignition 2.19.0 Jun 25 18:43:59.377360 ignition[1004]: INFO : Stage: umount Jun 25 18:43:59.386297 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:59.386297 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 18:43:59.386297 ignition[1004]: INFO : umount: umount passed Jun 25 18:43:59.386297 ignition[1004]: INFO : Ignition finished successfully Jun 25 18:43:59.381026 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:43:59.382082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:43:59.382173 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:43:59.384419 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:43:59.384483 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:43:59.386327 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:43:59.386386 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:43:59.389213 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:43:59.389348 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jun 25 18:43:59.391522 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:43:59.391659 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:43:59.401738 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:43:59.401810 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:43:59.403321 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:43:59.403379 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:43:59.405444 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:43:59.405503 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:43:59.407597 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:43:59.407653 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:43:59.409942 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:43:59.410007 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:43:59.412269 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:43:59.414701 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:43:59.414840 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:43:59.426782 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:43:59.427012 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:43:59.428696 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:43:59.428757 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:43:59.430745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:43:59.430792 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:43:59.432974 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jun 25 18:43:59.433035 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:43:59.435652 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:43:59.435710 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:43:59.437803 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:43:59.437877 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:59.452006 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:43:59.453209 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:43:59.453275 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:43:59.455748 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:43:59.455806 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:43:59.458322 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:43:59.458381 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:43:59.461125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:43:59.461182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:59.462846 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:43:59.462994 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:43:59.465693 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:43:59.476014 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:43:59.484928 systemd[1]: Switching root. Jun 25 18:43:59.512844 systemd-journald[193]: Journal stopped Jun 25 18:44:00.647369 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Jun 25 18:44:00.647443 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:44:00.647457 kernel: SELinux: policy capability open_perms=1 Jun 25 18:44:00.647469 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:44:00.647480 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:44:00.647491 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:44:00.647516 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:44:00.647527 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:44:00.647545 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:44:00.647557 kernel: audit: type=1403 audit(1719341039.933:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:44:00.647574 systemd[1]: Successfully loaded SELinux policy in 39.043ms. Jun 25 18:44:00.647595 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.544ms. Jun 25 18:44:00.647609 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:44:00.647626 systemd[1]: Detected virtualization kvm. Jun 25 18:44:00.647640 systemd[1]: Detected architecture x86-64. Jun 25 18:44:00.647654 systemd[1]: Detected first boot. Jun 25 18:44:00.647666 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:44:00.647679 zram_generator::config[1048]: No configuration found. Jun 25 18:44:00.647692 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:44:00.647704 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:44:00.647716 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jun 25 18:44:00.647729 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:44:00.647742 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:44:00.647756 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:44:00.647768 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:44:00.647780 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:44:00.647793 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:44:00.647805 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:44:00.647818 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:44:00.647830 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:44:00.647842 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:44:00.647855 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:44:00.647883 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:44:00.647895 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:44:00.647908 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:44:00.647921 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:44:00.647933 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:44:00.647945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:44:00.647957 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jun 25 18:44:00.647969 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:44:00.647981 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:44:00.647996 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:44:00.648008 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:44:00.648025 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:44:00.648038 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:44:00.648050 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:44:00.648062 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:44:00.648075 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:44:00.648087 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:44:00.648102 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:44:00.648114 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:44:00.648126 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:44:00.648138 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:44:00.648150 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:44:00.648162 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:44:00.648175 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:44:00.648188 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:44:00.648200 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:44:00.648214 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jun 25 18:44:00.648227 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:44:00.648239 systemd[1]: Reached target machines.target - Containers. Jun 25 18:44:00.648251 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:44:00.648263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:44:00.648275 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:44:00.648287 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:44:00.648299 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:44:00.648314 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:44:00.648331 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:44:00.648343 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:44:00.648356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:44:00.648368 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:44:00.648381 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:44:00.648393 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:44:00.648405 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:44:00.648419 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:44:00.648431 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 18:44:00.648445 kernel: fuse: init (API version 7.39) Jun 25 18:44:00.648459 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:44:00.648472 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:44:00.648486 kernel: loop: module loaded Jun 25 18:44:00.648509 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:44:00.648521 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:44:00.648553 systemd-journald[1117]: Collecting audit messages is disabled. Jun 25 18:44:00.648578 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:44:00.648591 systemd[1]: Stopped verity-setup.service. Jun 25 18:44:00.648603 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:44:00.648616 systemd-journald[1117]: Journal started Jun 25 18:44:00.648640 systemd-journald[1117]: Runtime Journal (/run/log/journal/64d0178fdc6148408d2e9e0254444420) is 6.0M, max 48.3M, 42.3M free. Jun 25 18:44:00.433043 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:44:00.452355 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 18:44:00.452821 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:44:00.653919 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:44:00.652816 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:44:00.654127 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:44:00.655407 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:44:00.656590 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jun 25 18:44:00.657857 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:44:00.659425 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:44:00.660770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:44:00.662567 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:44:00.662740 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:44:00.664410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:44:00.664595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:44:00.666148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:44:00.666327 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:44:00.668110 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:44:00.669751 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:44:00.670010 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:44:00.671621 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:44:00.671811 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:44:00.673270 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:44:00.674746 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:44:00.676322 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:44:00.690987 kernel: ACPI: bus type drm_connector registered Jun 25 18:44:00.692381 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:44:00.692588 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:44:00.694851 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jun 25 18:44:00.708987 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:44:00.711697 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:44:00.712934 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:44:00.713024 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:44:00.715102 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:44:00.717477 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:44:00.723418 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:44:00.724648 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:44:00.727237 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:44:00.729600 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:44:00.730855 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:44:00.734594 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:44:00.736142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:44:00.737832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:44:00.743047 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:44:00.746196 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jun 25 18:44:00.749464 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:44:00.751445 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:44:00.753777 systemd-journald[1117]: Time spent on flushing to /var/log/journal/64d0178fdc6148408d2e9e0254444420 is 26.031ms for 990 entries. Jun 25 18:44:00.753777 systemd-journald[1117]: System Journal (/var/log/journal/64d0178fdc6148408d2e9e0254444420) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:44:00.789274 systemd-journald[1117]: Received client request to flush runtime journal. Jun 25 18:44:00.789314 kernel: loop0: detected capacity change from 0 to 139760 Jun 25 18:44:00.755327 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:44:00.764420 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:44:00.767365 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:44:00.771298 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:44:00.779092 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:44:00.784007 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:44:00.793828 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:44:00.796889 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:44:00.802616 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:44:00.807840 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:44:00.817920 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Jun 25 18:44:00.817946 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. 
Jun 25 18:44:00.823397 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:44:00.824349 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:44:00.827070 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:44:00.831909 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:44:00.837074 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:44:00.855899 kernel: loop1: detected capacity change from 0 to 210664 Jun 25 18:44:00.868284 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:44:00.876057 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:44:00.885905 kernel: loop2: detected capacity change from 0 to 80568 Jun 25 18:44:00.900057 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jun 25 18:44:00.900418 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jun 25 18:44:00.906498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:44:00.938906 kernel: loop3: detected capacity change from 0 to 139760 Jun 25 18:44:00.951017 kernel: loop4: detected capacity change from 0 to 210664 Jun 25 18:44:00.958910 kernel: loop5: detected capacity change from 0 to 80568 Jun 25 18:44:00.964771 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 18:44:00.966760 (sd-merge)[1191]: Merged extensions into '/usr'. Jun 25 18:44:00.971276 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:44:00.971293 systemd[1]: Reloading... Jun 25 18:44:01.022887 zram_generator::config[1215]: No configuration found. Jun 25 18:44:01.089512 ldconfig[1156]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jun 25 18:44:01.146706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:01.197992 systemd[1]: Reloading finished in 226 ms. Jun 25 18:44:01.233565 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:44:01.235476 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:44:01.249158 systemd[1]: Starting ensure-sysext.service... Jun 25 18:44:01.251638 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:44:01.256586 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:44:01.256682 systemd[1]: Reloading... Jun 25 18:44:01.274650 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:44:01.275044 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:44:01.276182 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:44:01.276494 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jun 25 18:44:01.276572 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jun 25 18:44:01.279982 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:44:01.279994 systemd-tmpfiles[1253]: Skipping /boot Jun 25 18:44:01.293562 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:44:01.293582 systemd-tmpfiles[1253]: Skipping /boot Jun 25 18:44:01.315905 zram_generator::config[1280]: No configuration found. 
Jun 25 18:44:01.428333 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:44:01.478046 systemd[1]: Reloading finished in 220 ms.
Jun 25 18:44:01.498056 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 25 18:44:01.499917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 18:44:01.517895 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:44:01.520723 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 25 18:44:01.523151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 25 18:44:01.527036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 18:44:01.530264 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 25 18:44:01.535915 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 25 18:44:01.543621 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 25 18:44:01.546731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:44:01.546932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:44:01.551953 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:44:01.554967 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:44:01.559082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:44:01.560441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:44:01.560572 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:44:01.561716 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Jun 25 18:44:01.566914 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 25 18:44:01.569172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:44:01.569404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:44:01.573524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:44:01.573798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:44:01.576120 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:44:01.576328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:44:01.585054 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 25 18:44:01.592333 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:44:01.592632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 18:44:01.593770 augenrules[1346]: No rules
Jun 25 18:44:01.602266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 18:44:01.609169 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 18:44:01.612584 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 18:44:01.620209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 18:44:01.621383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 18:44:01.623436 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 25 18:44:01.624685 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 18:44:01.625645 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 25 18:44:01.627268 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 25 18:44:01.629567 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 18:44:01.631414 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 25 18:44:01.633520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 18:44:01.633695 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 18:44:01.635501 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 18:44:01.635669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 18:44:01.637894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 18:44:01.638098 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 18:44:01.641323 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 18:44:01.641561 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 18:44:01.652889 systemd[1]: Finished ensure-sysext.service.
Jun 25 18:44:01.672890 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1370)
Jun 25 18:44:01.675159 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 18:44:01.677887 systemd-resolved[1320]: Positive Trust Anchors:
Jun 25 18:44:01.678305 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 18:44:01.678405 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 18:44:01.678885 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1367)
Jun 25 18:44:01.678940 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 18:44:01.679015 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jun 25 18:44:01.682431 systemd-resolved[1320]: Defaulting to hostname 'linux'.
Jun 25 18:44:01.688090 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 25 18:44:01.689552 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 25 18:44:01.689760 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 18:44:01.691391 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 25 18:44:01.700733 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 25 18:44:01.700858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 18:44:01.720946 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 25 18:44:01.731845 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 25 18:44:01.748975 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 25 18:44:01.755778 systemd-networkd[1387]: lo: Link UP
Jun 25 18:44:01.755789 systemd-networkd[1387]: lo: Gained carrier
Jun 25 18:44:01.756969 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jun 25 18:44:01.759360 systemd-networkd[1387]: Enumeration completed
Jun 25 18:44:01.761387 kernel: ACPI: button: Power Button [PWRF]
Jun 25 18:44:01.759457 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 25 18:44:01.759797 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:44:01.759802 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 25 18:44:01.761742 systemd[1]: Reached target network.target - Network.
Jun 25 18:44:01.761820 systemd-networkd[1387]: eth0: Link UP
Jun 25 18:44:01.761824 systemd-networkd[1387]: eth0: Gained carrier
Jun 25 18:44:01.761837 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 25 18:44:01.769658 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Jun 25 18:44:01.770168 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 25 18:44:01.773009 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jun 25 18:44:01.782483 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 25 18:44:01.784288 systemd[1]: Reached target time-set.target - System Time Set.
Jun 25 18:44:01.785487 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jun 25 18:44:01.786325 systemd-timesyncd[1390]: Initial clock synchronization to Tue 2024-06-25 18:44:02.184943 UTC.
Jun 25 18:44:01.794913 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jun 25 18:44:01.835139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:44:01.840922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 25 18:44:01.841160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:44:01.845884 kernel: mousedev: PS/2 mouse device common for all mice
Jun 25 18:44:01.848065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 25 18:44:01.914265 kernel: kvm_amd: TSC scaling supported
Jun 25 18:44:01.914331 kernel: kvm_amd: Nested Virtualization enabled
Jun 25 18:44:01.914345 kernel: kvm_amd: Nested Paging enabled
Jun 25 18:44:01.914358 kernel: kvm_amd: LBR virtualization supported
Jun 25 18:44:01.914990 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jun 25 18:44:01.916261 kernel: kvm_amd: Virtual GIF supported
Jun 25 18:44:01.936925 kernel: EDAC MC: Ver: 3.0.0
Jun 25 18:44:01.944458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 25 18:44:01.978063 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jun 25 18:44:01.990310 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jun 25 18:44:01.999889 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:44:02.032461 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jun 25 18:44:02.034129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 18:44:02.035310 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 18:44:02.036552 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 25 18:44:02.038004 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 25 18:44:02.039629 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 25 18:44:02.040991 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 25 18:44:02.042347 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 25 18:44:02.043699 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 25 18:44:02.043731 systemd[1]: Reached target paths.target - Path Units.
Jun 25 18:44:02.044826 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 18:44:02.047058 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 25 18:44:02.050330 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 25 18:44:02.061087 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 25 18:44:02.063685 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jun 25 18:44:02.065400 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 25 18:44:02.066676 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 18:44:02.067740 systemd[1]: Reached target basic.target - Basic System.
Jun 25 18:44:02.068795 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:44:02.068823 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 25 18:44:02.069863 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 25 18:44:02.072091 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 25 18:44:02.076013 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jun 25 18:44:02.077027 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 25 18:44:02.080223 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 25 18:44:02.082284 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 25 18:44:02.083497 jq[1424]: false
Jun 25 18:44:02.084058 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 25 18:44:02.087730 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 25 18:44:02.092545 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 25 18:44:02.095623 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 25 18:44:02.102113 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 25 18:44:02.104013 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 25 18:44:02.104588 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 25 18:44:02.106246 systemd[1]: Starting update-engine.service - Update Engine...
Jun 25 18:44:02.109154 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 25 18:44:02.111692 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jun 25 18:44:02.116311 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 25 18:44:02.116556 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 25 18:44:02.117042 systemd[1]: motdgen.service: Deactivated successfully.
Jun 25 18:44:02.117970 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 25 18:44:02.120527 dbus-daemon[1423]: [system] SELinux support is enabled
Jun 25 18:44:02.120786 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 25 18:44:02.123864 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 25 18:44:02.124178 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 25 18:44:02.126186 extend-filesystems[1425]: Found loop3
Jun 25 18:44:02.127377 extend-filesystems[1425]: Found loop4
Jun 25 18:44:02.127377 extend-filesystems[1425]: Found loop5
Jun 25 18:44:02.127377 extend-filesystems[1425]: Found sr0
Jun 25 18:44:02.127377 extend-filesystems[1425]: Found vda
Jun 25 18:44:02.127377 extend-filesystems[1425]: Found vda1
Jun 25 18:44:02.134672 extend-filesystems[1425]: Found vda2
Jun 25 18:44:02.134672 extend-filesystems[1425]: Found vda3
Jun 25 18:44:02.134672 extend-filesystems[1425]: Found usr
Jun 25 18:44:02.134672 extend-filesystems[1425]: Found vda4
Jun 25 18:44:02.134672 extend-filesystems[1425]: Found vda6
Jun 25 18:44:02.134672 extend-filesystems[1425]: Found vda7
Jun 25 18:44:02.134672 extend-filesystems[1425]: Found vda9
Jun 25 18:44:02.134672 extend-filesystems[1425]: Checking size of /dev/vda9
Jun 25 18:44:02.146815 jq[1438]: true
Jun 25 18:44:02.143394 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 25 18:44:02.143436 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 25 18:44:02.148411 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 25 18:44:02.148439 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 25 18:44:02.156935 tar[1442]: linux-amd64/helm
Jun 25 18:44:02.161633 extend-filesystems[1425]: Resized partition /dev/vda9
Jun 25 18:44:02.163020 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 25 18:44:02.164813 extend-filesystems[1459]: resize2fs 1.47.0 (5-Feb-2023)
Jun 25 18:44:02.171906 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jun 25 18:44:02.177666 update_engine[1437]: I0625 18:44:02.177556 1437 main.cc:92] Flatcar Update Engine starting
Jun 25 18:44:02.185279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1369)
Jun 25 18:44:02.183617 systemd[1]: Started update-engine.service - Update Engine.
Jun 25 18:44:02.185423 update_engine[1437]: I0625 18:44:02.183147 1437 update_check_scheduler.cc:74] Next update check in 10m36s
Jun 25 18:44:02.191418 jq[1450]: true
Jun 25 18:44:02.191116 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 25 18:44:02.210949 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 18:44:02.238439 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:44:02.238742 systemd-logind[1433]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 18:44:02.238763 systemd-logind[1433]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:44:02.239114 systemd-logind[1433]: New seat seat0. Jun 25 18:44:02.240400 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 18:44:02.240400 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:44:02.240400 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 18:44:02.248109 extend-filesystems[1425]: Resized filesystem in /dev/vda9 Jun 25 18:44:02.246777 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:44:02.249739 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:44:02.250060 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:44:02.273932 bash[1484]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:44:02.276909 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:44:02.279364 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 18:44:02.399969 containerd[1452]: time="2024-06-25T18:44:02.399853315Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:44:02.430495 containerd[1452]: time="2024-06-25T18:44:02.430376959Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:44:02.430495 containerd[1452]: time="2024-06-25T18:44:02.430426830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:44:02.432466 containerd[1452]: time="2024-06-25T18:44:02.432431255Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.432528284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.432829921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.432846582Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.432964471Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.433035444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.433048192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.433159559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.433456914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.433474429Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.433483928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:02.433777 containerd[1452]: time="2024-06-25T18:44:02.433596798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:44:02.434065 containerd[1452]: time="2024-06-25T18:44:02.433613765Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:44:02.434065 containerd[1452]: time="2024-06-25T18:44:02.433694616Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:44:02.434065 containerd[1452]: time="2024-06-25T18:44:02.433711751Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:44:02.439197 containerd[1452]: time="2024-06-25T18:44:02.439169298Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:44:02.439244 containerd[1452]: time="2024-06-25T18:44:02.439198856Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:44:02.439244 containerd[1452]: time="2024-06-25T18:44:02.439212889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jun 25 18:44:02.439282 containerd[1452]: time="2024-06-25T18:44:02.439247813Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:44:02.439282 containerd[1452]: time="2024-06-25T18:44:02.439262571Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:44:02.439282 containerd[1452]: time="2024-06-25T18:44:02.439273889Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:44:02.439344 containerd[1452]: time="2024-06-25T18:44:02.439288711Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:44:02.439459 containerd[1452]: time="2024-06-25T18:44:02.439438830Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:44:02.439496 containerd[1452]: time="2024-06-25T18:44:02.439463244Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:44:02.439496 containerd[1452]: time="2024-06-25T18:44:02.439480665Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:44:02.439534 containerd[1452]: time="2024-06-25T18:44:02.439499557Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:44:02.439534 containerd[1452]: time="2024-06-25T18:44:02.439517534Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:44:02.439569 containerd[1452]: time="2024-06-25T18:44:02.439539392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:44:02.439569 containerd[1452]: time="2024-06-25T18:44:02.439557097Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jun 25 18:44:02.439610 containerd[1452]: time="2024-06-25T18:44:02.439574285Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:44:02.439610 containerd[1452]: time="2024-06-25T18:44:02.439594839Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:44:02.439648 containerd[1452]: time="2024-06-25T18:44:02.439612995Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:44:02.439648 containerd[1452]: time="2024-06-25T18:44:02.439631688Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:44:02.439683 containerd[1452]: time="2024-06-25T18:44:02.439648109Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:44:02.439798 containerd[1452]: time="2024-06-25T18:44:02.439772950Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440105187Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440136008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440151344Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440172899Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440230607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440248173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440260365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440273104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440285716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440297708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440309605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.440339 containerd[1452]: time="2024-06-25T18:44:02.440321060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.441584 containerd[1452]: time="2024-06-25T18:44:02.440659093Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:44:02.441584 containerd[1452]: time="2024-06-25T18:44:02.440817227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.441584 containerd[1452]: time="2024-06-25T18:44:02.440832911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jun 25 18:44:02.441584 containerd[1452]: time="2024-06-25T18:44:02.440844546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.441584 containerd[1452]: time="2024-06-25T18:44:02.440857442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.441584 containerd[1452]: time="2024-06-25T18:44:02.440868771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.441584 containerd[1452]: time="2024-06-25T18:44:02.440881531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.441584 containerd[1452]: time="2024-06-25T18:44:02.440894628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:44:02.441584 containerd[1452]: time="2024-06-25T18:44:02.440905737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1
Jun 25 18:44:02.441777 containerd[1452]: time="2024-06-25T18:44:02.441170230Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 25 18:44:02.441777 containerd[1452]: time="2024-06-25T18:44:02.441217640Z" level=info msg="Connect containerd service"
Jun 25 18:44:02.441777 containerd[1452]: time="2024-06-25T18:44:02.441239583Z" level=info msg="using legacy CRI server"
Jun 25 18:44:02.441777 containerd[1452]: time="2024-06-25T18:44:02.441246104Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 25 18:44:02.441777 containerd[1452]: time="2024-06-25T18:44:02.441314721Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jun 25 18:44:02.442391 containerd[1452]: time="2024-06-25T18:44:02.442370139Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 25 18:44:02.442488 containerd[1452]: time="2024-06-25T18:44:02.442474415Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 25 18:44:02.442612 containerd[1452]: time="2024-06-25T18:44:02.442597500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 25 18:44:02.442705 containerd[1452]: time="2024-06-25T18:44:02.442687438Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 25 18:44:02.442771 containerd[1452]: time="2024-06-25T18:44:02.442752037Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 25 18:44:02.442932 containerd[1452]: time="2024-06-25T18:44:02.442565374Z" level=info msg="Start subscribing containerd event"
Jun 25 18:44:02.442999 containerd[1452]: time="2024-06-25T18:44:02.442988233Z" level=info msg="Start recovering state"
Jun 25 18:44:02.443094 containerd[1452]: time="2024-06-25T18:44:02.443082779Z" level=info msg="Start event monitor"
Jun 25 18:44:02.443141 containerd[1452]: time="2024-06-25T18:44:02.443130726Z" level=info msg="Start snapshots syncer"
Jun 25 18:44:02.443195 containerd[1452]: time="2024-06-25T18:44:02.443183848Z" level=info msg="Start cni network conf syncer for default"
Jun 25 18:44:02.443236 containerd[1452]: time="2024-06-25T18:44:02.443226629Z" level=info msg="Start streaming server"
Jun 25 18:44:02.443673 containerd[1452]: time="2024-06-25T18:44:02.443657010Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 25 18:44:02.443835 containerd[1452]: time="2024-06-25T18:44:02.443821224Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 25 18:44:02.446344 containerd[1452]: time="2024-06-25T18:44:02.446328822Z" level=info msg="containerd successfully booted in 0.048566s"
Jun 25 18:44:02.446601 systemd[1]: Started containerd.service - containerd container runtime.
Jun 25 18:44:02.449179 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 25 18:44:02.477772 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
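The CRI plugin's CNI load error above is expected until a network config appears in the directory it watches, /etc/cni/net.d (per the CniConfig dump). For reference, a minimal bridge conflist that would satisfy the loader might look like the following sketch; the file name, network name, and subnet are illustrative assumptions, not values from this host:

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

On a Kubernetes node this file is usually installed later by the cluster's network plugin, which is why containerd only warns here rather than failing.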
Jun 25 18:44:02.493187 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 25 18:44:02.504242 systemd[1]: issuegen.service: Deactivated successfully.
Jun 25 18:44:02.504546 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 25 18:44:02.512168 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 25 18:44:02.530821 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 25 18:44:02.541284 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 25 18:44:02.543817 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 25 18:44:02.545470 systemd[1]: Reached target getty.target - Login Prompts.
Jun 25 18:44:02.693543 tar[1442]: linux-amd64/LICENSE
Jun 25 18:44:02.693735 tar[1442]: linux-amd64/README.md
Jun 25 18:44:02.712683 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 25 18:44:02.918091 systemd-networkd[1387]: eth0: Gained IPv6LL
Jun 25 18:44:02.923014 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 25 18:44:02.925397 systemd[1]: Reached target network-online.target - Network is Online.
Jun 25 18:44:02.945363 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jun 25 18:44:02.948801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:02.951479 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 25 18:44:02.974371 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 25 18:44:02.974733 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jun 25 18:44:02.976632 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 25 18:44:02.979981 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 25 18:44:03.669209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:44:03.670981 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 25 18:44:03.672951 systemd[1]: Startup finished in 824ms (kernel) + 9.206s (initrd) + 3.776s (userspace) = 13.807s.
Jun 25 18:44:03.676044 (kubelet)[1537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:44:04.145279 kubelet[1537]: E0625 18:44:04.145149 1537 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:44:04.149255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:44:04.149469 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:44:04.149823 systemd[1]: kubelet.service: Consumed 1.017s CPU time.
Jun 25 18:44:05.318518 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 25 18:44:05.319760 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:35656.service - OpenSSH per-connection server daemon (10.0.0.1:35656).
Jun 25 18:44:05.366079 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 35656 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:44:05.367934 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:44:05.377023 systemd-logind[1433]: New session 1 of user core.
Jun 25 18:44:05.378337 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 25 18:44:05.386134 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 25 18:44:05.398070 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
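The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during `kubeadm init`/`kubeadm join`, so these early failures are expected until bootstrap runs. For orientation, a minimal KubeletConfiguration of the kind that ends up at that path looks roughly like the following sketch (field values are illustrative, not taken from this host):

```yaml
# /var/lib/kubelet/config.yaml - minimal sketch, normally generated by kubeadm
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches the SystemdCgroup:true runc option above
staticPodPath: /etc/kubernetes/manifests
```

Once the file exists, the systemd restart loop seen later in this log succeeds instead of failing with status=1/FAILURE.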
Jun 25 18:44:05.401201 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 25 18:44:05.409702 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:44:05.517197 systemd[1556]: Queued start job for default target default.target.
Jun 25 18:44:05.527269 systemd[1556]: Created slice app.slice - User Application Slice.
Jun 25 18:44:05.527294 systemd[1556]: Reached target paths.target - Paths.
Jun 25 18:44:05.527308 systemd[1556]: Reached target timers.target - Timers.
Jun 25 18:44:05.528846 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 25 18:44:05.540460 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 25 18:44:05.540581 systemd[1556]: Reached target sockets.target - Sockets.
Jun 25 18:44:05.540595 systemd[1556]: Reached target basic.target - Basic System.
Jun 25 18:44:05.540630 systemd[1556]: Reached target default.target - Main User Target.
Jun 25 18:44:05.540665 systemd[1556]: Startup finished in 123ms.
Jun 25 18:44:05.541407 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 25 18:44:05.543008 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 25 18:44:05.606876 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:35670.service - OpenSSH per-connection server daemon (10.0.0.1:35670).
Jun 25 18:44:05.645765 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 35670 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:44:05.647445 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:44:05.651746 systemd-logind[1433]: New session 2 of user core.
Jun 25 18:44:05.662015 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 25 18:44:05.718688 sshd[1567]: pam_unix(sshd:session): session closed for user core
Jun 25 18:44:05.734862 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:35670.service: Deactivated successfully.
Jun 25 18:44:05.736778 systemd[1]: session-2.scope: Deactivated successfully.
Jun 25 18:44:05.738299 systemd-logind[1433]: Session 2 logged out. Waiting for processes to exit.
Jun 25 18:44:05.739549 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:55808.service - OpenSSH per-connection server daemon (10.0.0.1:55808).
Jun 25 18:44:05.740413 systemd-logind[1433]: Removed session 2.
Jun 25 18:44:05.775260 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 55808 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:44:05.776619 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:44:05.780461 systemd-logind[1433]: New session 3 of user core.
Jun 25 18:44:05.791036 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 25 18:44:05.841859 sshd[1574]: pam_unix(sshd:session): session closed for user core
Jun 25 18:44:05.858848 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:55808.service: Deactivated successfully.
Jun 25 18:44:05.861044 systemd[1]: session-3.scope: Deactivated successfully.
Jun 25 18:44:05.862956 systemd-logind[1433]: Session 3 logged out. Waiting for processes to exit.
Jun 25 18:44:05.872176 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:55810.service - OpenSSH per-connection server daemon (10.0.0.1:55810).
Jun 25 18:44:05.873330 systemd-logind[1433]: Removed session 3.
Jun 25 18:44:05.904511 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 55810 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:44:05.905987 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:44:05.909848 systemd-logind[1433]: New session 4 of user core.
Jun 25 18:44:05.920008 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 25 18:44:05.975930 sshd[1581]: pam_unix(sshd:session): session closed for user core
Jun 25 18:44:05.992963 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:55810.service: Deactivated successfully.
Jun 25 18:44:05.994622 systemd[1]: session-4.scope: Deactivated successfully.
Jun 25 18:44:05.996210 systemd-logind[1433]: Session 4 logged out. Waiting for processes to exit.
Jun 25 18:44:06.004180 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:55814.service - OpenSSH per-connection server daemon (10.0.0.1:55814).
Jun 25 18:44:06.005120 systemd-logind[1433]: Removed session 4.
Jun 25 18:44:06.036968 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 55814 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:44:06.038632 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:44:06.042643 systemd-logind[1433]: New session 5 of user core.
Jun 25 18:44:06.050008 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 25 18:44:06.109477 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 25 18:44:06.109773 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:44:06.133643 sudo[1593]: pam_unix(sudo:session): session closed for user root
Jun 25 18:44:06.135268 sshd[1590]: pam_unix(sshd:session): session closed for user core
Jun 25 18:44:06.154543 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:55814.service: Deactivated successfully.
Jun 25 18:44:06.156114 systemd[1]: session-5.scope: Deactivated successfully.
Jun 25 18:44:06.157552 systemd-logind[1433]: Session 5 logged out. Waiting for processes to exit.
Jun 25 18:44:06.166123 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:55826.service - OpenSSH per-connection server daemon (10.0.0.1:55826).
Jun 25 18:44:06.166850 systemd-logind[1433]: Removed session 5.
Jun 25 18:44:06.197316 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 55826 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:44:06.198642 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:44:06.202400 systemd-logind[1433]: New session 6 of user core.
Jun 25 18:44:06.213999 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 25 18:44:06.268364 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 25 18:44:06.268657 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:44:06.271977 sudo[1603]: pam_unix(sudo:session): session closed for user root
Jun 25 18:44:06.277585 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jun 25 18:44:06.277956 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:44:06.296159 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jun 25 18:44:06.297932 auditctl[1606]: No rules
Jun 25 18:44:06.298337 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 25 18:44:06.298552 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jun 25 18:44:06.301226 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 18:44:06.329307 augenrules[1624]: No rules
Jun 25 18:44:06.331179 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 18:44:06.332392 sudo[1602]: pam_unix(sudo:session): session closed for user root
Jun 25 18:44:06.334102 sshd[1598]: pam_unix(sshd:session): session closed for user core
Jun 25 18:44:06.344489 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:55826.service: Deactivated successfully.
Jun 25 18:44:06.346022 systemd[1]: session-6.scope: Deactivated successfully.
Jun 25 18:44:06.347380 systemd-logind[1433]: Session 6 logged out. Waiting for processes to exit.
Jun 25 18:44:06.348568 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:55832.service - OpenSSH per-connection server daemon (10.0.0.1:55832).
Jun 25 18:44:06.349342 systemd-logind[1433]: Removed session 6.
Jun 25 18:44:06.383787 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 55832 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:44:06.385025 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:44:06.388660 systemd-logind[1433]: New session 7 of user core.
Jun 25 18:44:06.398038 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 25 18:44:06.451480 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 25 18:44:06.451764 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jun 25 18:44:06.552126 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 25 18:44:06.552232 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 25 18:44:06.798167 dockerd[1646]: time="2024-06-25T18:44:06.798028223Z" level=info msg="Starting up"
Jun 25 18:44:07.238761 dockerd[1646]: time="2024-06-25T18:44:07.238704517Z" level=info msg="Loading containers: start."
Jun 25 18:44:07.380925 kernel: Initializing XFRM netlink socket
Jun 25 18:44:07.474021 systemd-networkd[1387]: docker0: Link UP
Jun 25 18:44:07.488475 dockerd[1646]: time="2024-06-25T18:44:07.488429143Z" level=info msg="Loading containers: done."
Jun 25 18:44:07.535786 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3251855749-merged.mount: Deactivated successfully.
Jun 25 18:44:07.539194 dockerd[1646]: time="2024-06-25T18:44:07.539153751Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 25 18:44:07.539373 dockerd[1646]: time="2024-06-25T18:44:07.539345182Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jun 25 18:44:07.539491 dockerd[1646]: time="2024-06-25T18:44:07.539467428Z" level=info msg="Daemon has completed initialization"
Jun 25 18:44:07.569435 dockerd[1646]: time="2024-06-25T18:44:07.569380474Z" level=info msg="API listen on /run/docker.sock"
Jun 25 18:44:07.569588 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 25 18:44:13.147100 containerd[1452]: time="2024-06-25T18:44:13.147047450Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jun 25 18:44:13.740176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919399766.mount: Deactivated successfully.
Jun 25 18:44:14.237265 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 25 18:44:14.250202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:14.418896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
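The timestamps embedded in the daemon messages are RFC 3339 with nanosecond precision, which is more than Python's `datetime` natively represents. A small sketch of how one might compute the gap between two such records, using the dockerd "Starting up" and "Daemon has completed initialization" timestamps above (truncating nanoseconds to microseconds):

```python
from datetime import datetime, timezone

def parse_rfc3339_ns(ts: str) -> datetime:
    """Parse an RFC 3339 UTC timestamp with up to 9 fractional digits,
    truncating nanoseconds to the microseconds datetime can hold."""
    base, frac = ts.rstrip("Z").split(".")
    micros = int(frac[:6].ljust(6, "0"))  # ns -> us, pad short fractions
    return datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(
        microsecond=micros, tzinfo=timezone.utc
    )

# Timestamps taken from the two dockerd records above:
start = parse_rfc3339_ns("2024-06-25T18:44:06.798028223Z")
done = parse_rfc3339_ns("2024-06-25T18:44:07.539467428Z")
print(f"daemon init took {(done - start).total_seconds():.3f}s")  # -> daemon init took 0.741s
```

The same parser works for the containerd image-pull records later in the log, whose "in N.NNNs" durations can be cross-checked this way.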
Jun 25 18:44:14.423967 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:44:14.685984 kubelet[1846]: E0625 18:44:14.685804 1846 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:44:14.750959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:44:14.751254 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 25 18:44:15.612311 containerd[1452]: time="2024-06-25T18:44:15.612234982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:15.613100 containerd[1452]: time="2024-06-25T18:44:15.613046339Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801"
Jun 25 18:44:15.614193 containerd[1452]: time="2024-06-25T18:44:15.614164563Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:15.619646 containerd[1452]: time="2024-06-25T18:44:15.619594381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:15.620968 containerd[1452]: time="2024-06-25T18:44:15.620923387Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 2.473829141s"
Jun 25 18:44:15.621017 containerd[1452]: time="2024-06-25T18:44:15.620965039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\""
Jun 25 18:44:15.647372 containerd[1452]: time="2024-06-25T18:44:15.647322873Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jun 25 18:44:18.067393 containerd[1452]: time="2024-06-25T18:44:18.067324519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:18.068051 containerd[1452]: time="2024-06-25T18:44:18.068014620Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674"
Jun 25 18:44:18.069263 containerd[1452]: time="2024-06-25T18:44:18.069221264Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:18.071935 containerd[1452]: time="2024-06-25T18:44:18.071891281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:18.072885 containerd[1452]: time="2024-06-25T18:44:18.072826225Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.425457077s"
Jun 25 18:44:18.072885 containerd[1452]: time="2024-06-25T18:44:18.072880767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\""
Jun 25 18:44:18.096623 containerd[1452]: time="2024-06-25T18:44:18.096582296Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jun 25 18:44:19.246499 containerd[1452]: time="2024-06-25T18:44:19.246417563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:19.247404 containerd[1452]: time="2024-06-25T18:44:19.247357692Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120"
Jun 25 18:44:19.248683 containerd[1452]: time="2024-06-25T18:44:19.248647397Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:19.251443 containerd[1452]: time="2024-06-25T18:44:19.251374916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:19.252457 containerd[1452]: time="2024-06-25T18:44:19.252422004Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.155803808s"
Jun 25 18:44:19.252495 containerd[1452]: time="2024-06-25T18:44:19.252456787Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\""
Jun 25 18:44:19.276143 containerd[1452]: time="2024-06-25T18:44:19.276102459Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jun 25 18:44:20.869115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263537832.mount: Deactivated successfully.
Jun 25 18:44:21.623114 containerd[1452]: time="2024-06-25T18:44:21.623057933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:21.623792 containerd[1452]: time="2024-06-25T18:44:21.623755893Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438"
Jun 25 18:44:21.624947 containerd[1452]: time="2024-06-25T18:44:21.624892992Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:21.626939 containerd[1452]: time="2024-06-25T18:44:21.626896868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:21.627548 containerd[1452]: time="2024-06-25T18:44:21.627509895Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 2.351368113s"
Jun 25 18:44:21.627583 containerd[1452]: time="2024-06-25T18:44:21.627548087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\""
Jun 25 18:44:21.649644 containerd[1452]: time="2024-06-25T18:44:21.649589841Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jun 25 18:44:22.465848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3450781514.mount: Deactivated successfully.
Jun 25 18:44:23.523682 containerd[1452]: time="2024-06-25T18:44:23.523613492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:23.524461 containerd[1452]: time="2024-06-25T18:44:23.524392829Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jun 25 18:44:23.525727 containerd[1452]: time="2024-06-25T18:44:23.525695632Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:23.528981 containerd[1452]: time="2024-06-25T18:44:23.528940473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:23.530590 containerd[1452]: time="2024-06-25T18:44:23.530531996Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.880890568s"
Jun 25 18:44:23.530590 containerd[1452]: time="2024-06-25T18:44:23.530579549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jun 25 18:44:23.614191 containerd[1452]: time="2024-06-25T18:44:23.614135520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jun 25 18:44:24.066106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700494149.mount: Deactivated successfully.
Jun 25 18:44:24.073229 containerd[1452]: time="2024-06-25T18:44:24.073186416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:24.073985 containerd[1452]: time="2024-06-25T18:44:24.073933505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jun 25 18:44:24.075102 containerd[1452]: time="2024-06-25T18:44:24.075071728Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:24.077286 containerd[1452]: time="2024-06-25T18:44:24.077253948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:24.078042 containerd[1452]: time="2024-06-25T18:44:24.078008922Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 463.836233ms"
Jun 25 18:44:24.078113 containerd[1452]: time="2024-06-25T18:44:24.078043348Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jun 25 18:44:24.156973 containerd[1452]: time="2024-06-25T18:44:24.156899766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jun 25 18:44:24.699594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 25 18:44:24.721770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:24.725761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010788356.mount: Deactivated successfully.
Jun 25 18:44:24.879200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:44:24.885624 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 25 18:44:25.041059 kubelet[1973]: E0625 18:44:25.040851 1973 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 25 18:44:25.045728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 18:44:25.046020 systemd[1]: kubelet.service: Failed with result 'exit-code'.
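The "Scheduled restart job, restart counter is at N" messages indicate systemd is re-launching the failed unit according to its Restart= policy, so the kubelet keeps retrying until its config file exists. If this loop needed tuning, a drop-in of roughly the following shape would do it; the path and values here are illustrative assumptions, not this host's actual unit:

```ini
# /etc/systemd/system/kubelet.service.d/10-restart.conf (hypothetical drop-in)
[Service]
Restart=on-failure
RestartSec=10
```

After adding such a drop-in, `systemctl daemon-reload` is required for it to take effect.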
Jun 25 18:44:27.397353 containerd[1452]: time="2024-06-25T18:44:27.397245610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:27.404242 containerd[1452]: time="2024-06-25T18:44:27.404156204Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jun 25 18:44:27.405631 containerd[1452]: time="2024-06-25T18:44:27.405600252Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:27.409744 containerd[1452]: time="2024-06-25T18:44:27.409714273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:44:27.410827 containerd[1452]: time="2024-06-25T18:44:27.410792811Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.253847387s"
Jun 25 18:44:27.410895 containerd[1452]: time="2024-06-25T18:44:27.410828219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jun 25 18:44:29.390743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:44:29.403030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:29.424034 systemd[1]: Reloading requested from client PID 2101 ('systemctl') (unit session-7.scope)...
Jun 25 18:44:29.424049 systemd[1]: Reloading...
Jun 25 18:44:29.498947 zram_generator::config[2143]: No configuration found.
Jun 25 18:44:29.714590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 18:44:29.804327 systemd[1]: Reloading finished in 379 ms.
Jun 25 18:44:29.861282 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:29.864319 systemd[1]: kubelet.service: Deactivated successfully.
Jun 25 18:44:29.864563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:44:29.866125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 25 18:44:30.023684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 25 18:44:30.028652 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 25 18:44:30.281488 kubelet[2190]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:44:30.281488 kubelet[2190]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 25 18:44:30.281488 kubelet[2190]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 25 18:44:30.282783 kubelet[2190]: I0625 18:44:30.282730 2190 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 25 18:44:30.816411 kubelet[2190]: I0625 18:44:30.816353 2190 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jun 25 18:44:30.816411 kubelet[2190]: I0625 18:44:30.816392 2190 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 25 18:44:30.816703 kubelet[2190]: I0625 18:44:30.816674 2190 server.go:927] "Client rotation is on, will bootstrap in background"
Jun 25 18:44:30.841306 kubelet[2190]: I0625 18:44:30.841230 2190 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 25 18:44:30.841996 kubelet[2190]: E0625 18:44:30.841941 2190 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:30.857963 kubelet[2190]: I0625 18:44:30.857892 2190 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 25 18:44:30.859533 kubelet[2190]: I0625 18:44:30.859477 2190 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 25 18:44:30.859754 kubelet[2190]: I0625 18:44:30.859515 2190 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jun 25 18:44:30.860370 kubelet[2190]: I0625 18:44:30.860329 2190 topology_manager.go:138] "Creating topology manager with none policy"
Jun 25 18:44:30.860370 kubelet[2190]: I0625 18:44:30.860350 2190 container_manager_linux.go:301] "Creating device plugin manager"
Jun 25 18:44:30.860537 kubelet[2190]: I0625 18:44:30.860485 2190 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:44:30.861343 kubelet[2190]: I0625 18:44:30.861321 2190 kubelet.go:400] "Attempting to sync node with API server"
Jun 25 18:44:30.861383 kubelet[2190]: I0625 18:44:30.861349 2190 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 25 18:44:30.861383 kubelet[2190]: I0625 18:44:30.861372 2190 kubelet.go:312] "Adding apiserver pod source"
Jun 25 18:44:30.861428 kubelet[2190]: I0625 18:44:30.861390 2190 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 25 18:44:30.862361 kubelet[2190]: W0625 18:44:30.862207 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:30.862361 kubelet[2190]: E0625 18:44:30.862307 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:30.862883 kubelet[2190]: W0625 18:44:30.862808 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:30.862957 kubelet[2190]: E0625 18:44:30.862890 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:30.866240 kubelet[2190]: I0625 18:44:30.866206 2190 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Jun 25 18:44:30.867751 kubelet[2190]: I0625 18:44:30.867716 2190 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 25 18:44:30.867829 kubelet[2190]: W0625 18:44:30.867774 2190 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 25 18:44:30.868535 kubelet[2190]: I0625 18:44:30.868396 2190 server.go:1264] "Started kubelet"
Jun 25 18:44:30.868535 kubelet[2190]: I0625 18:44:30.868486 2190 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 25 18:44:30.869377 kubelet[2190]: I0625 18:44:30.869013 2190 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 25 18:44:30.869377 kubelet[2190]: I0625 18:44:30.869057 2190 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jun 25 18:44:30.869624 kubelet[2190]: I0625 18:44:30.869602 2190 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 25 18:44:30.870274 kubelet[2190]: I0625 18:44:30.870245 2190 server.go:455] "Adding debug handlers to kubelet server"
Jun 25 18:44:30.872384 kubelet[2190]: E0625 18:44:30.872139 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jun 25 18:44:30.872384 kubelet[2190]: I0625 18:44:30.872184 2190 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 25 18:44:30.872384 kubelet[2190]: I0625 18:44:30.872278 2190 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jun 25 18:44:30.872384 kubelet[2190]: I0625 18:44:30.872369 2190 reconciler.go:26] "Reconciler: start to sync state"
Jun 25 18:44:30.873197 kubelet[2190]: W0625 18:44:30.872674 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:30.873197 kubelet[2190]: E0625 18:44:30.872737 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:30.873197 kubelet[2190]: E0625 18:44:30.872987 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms"
Jun 25 18:44:30.874610 kubelet[2190]: I0625 18:44:30.874572 2190 factory.go:221] Registration of the systemd container factory successfully
Jun 25 18:44:30.874743 kubelet[2190]: I0625 18:44:30.874644 2190 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 25 18:44:30.875502 kubelet[2190]: E0625 18:44:30.875437 2190 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 25 18:44:30.875880 kubelet[2190]: I0625 18:44:30.875855 2190 factory.go:221] Registration of the containerd container factory successfully
Jun 25 18:44:30.878939 kubelet[2190]: E0625 18:44:30.878305 2190 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc539830d96a59 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 18:44:30.868376153 +0000 UTC m=+0.700119308,LastTimestamp:2024-06-25 18:44:30.868376153 +0000 UTC m=+0.700119308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jun 25 18:44:30.910639 kubelet[2190]: I0625 18:44:30.910598 2190 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 25 18:44:30.910639 kubelet[2190]: I0625 18:44:30.910623 2190 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 25 18:44:30.910824 kubelet[2190]: I0625 18:44:30.910650 2190 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 18:44:30.916273 kubelet[2190]: I0625 18:44:30.916039 2190 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 25 18:44:30.917834 kubelet[2190]: I0625 18:44:30.917804 2190 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 25 18:44:30.917905 kubelet[2190]: I0625 18:44:30.917843 2190 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 25 18:44:30.917905 kubelet[2190]: I0625 18:44:30.917863 2190 kubelet.go:2337] "Starting kubelet main sync loop"
Jun 25 18:44:30.917984 kubelet[2190]: E0625 18:44:30.917932 2190 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 25 18:44:30.918572 kubelet[2190]: W0625 18:44:30.918370 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:30.918572 kubelet[2190]: E0625 18:44:30.918400 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:30.973725 kubelet[2190]: I0625 18:44:30.973660 2190 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 18:44:30.974195 kubelet[2190]: E0625 18:44:30.974170 2190 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost"
Jun 25 18:44:31.018556 kubelet[2190]: E0625 18:44:31.018460 2190 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 25 18:44:31.074690 kubelet[2190]: E0625 18:44:31.074538 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="400ms"
Jun 25 18:44:31.176368 kubelet[2190]: I0625 18:44:31.176325 2190 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 18:44:31.176848 kubelet[2190]: E0625 18:44:31.176793 2190 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost"
Jun 25 18:44:31.219112 kubelet[2190]: E0625 18:44:31.219045 2190 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 25 18:44:31.476045 kubelet[2190]: E0625 18:44:31.475931 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms"
Jun 25 18:44:31.578631 kubelet[2190]: I0625 18:44:31.578609 2190 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 18:44:31.579018 kubelet[2190]: E0625 18:44:31.578985 2190 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost"
Jun 25 18:44:31.620259 kubelet[2190]: E0625 18:44:31.620195 2190 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 25 18:44:31.650889 kubelet[2190]: I0625 18:44:31.650840 2190 policy_none.go:49] "None policy: Start"
Jun 25 18:44:31.651773 kubelet[2190]: I0625 18:44:31.651710 2190 memory_manager.go:170] "Starting memorymanager" policy="None"
Jun 25 18:44:31.651773 kubelet[2190]: I0625 18:44:31.651753 2190 state_mem.go:35] "Initializing new in-memory state store"
Jun 25 18:44:31.660772 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 25 18:44:31.678413 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 25 18:44:31.681730 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 25 18:44:31.692837 kubelet[2190]: I0625 18:44:31.692802 2190 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 18:44:31.693133 kubelet[2190]: I0625 18:44:31.693092 2190 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 25 18:44:31.693309 kubelet[2190]: I0625 18:44:31.693229 2190 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 18:44:31.694164 kubelet[2190]: E0625 18:44:31.694137 2190 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jun 25 18:44:31.726343 kubelet[2190]: W0625 18:44:31.726185 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:31.726343 kubelet[2190]: E0625 18:44:31.726254 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:32.143016 kubelet[2190]: W0625 18:44:32.142955 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:32.143016 kubelet[2190]: E0625 18:44:32.143017 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:32.256271 kubelet[2190]: W0625 18:44:32.256172 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:32.256271 kubelet[2190]: E0625 18:44:32.256252 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:32.277157 kubelet[2190]: E0625 18:44:32.276776 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="1.6s"
Jun 25 18:44:32.282518 kubelet[2190]: W0625 18:44:32.282445 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:32.282602 kubelet[2190]: E0625 18:44:32.282536 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:32.380891 kubelet[2190]: I0625 18:44:32.380835 2190 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 18:44:32.381304 kubelet[2190]: E0625 18:44:32.381261 2190 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost"
Jun 25 18:44:32.420599 kubelet[2190]: I0625 18:44:32.420407 2190 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jun 25 18:44:32.421921 kubelet[2190]: I0625 18:44:32.421885 2190 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jun 25 18:44:32.422687 kubelet[2190]: I0625 18:44:32.422662 2190 topology_manager.go:215] "Topology Admit Handler" podUID="257a0f59739385a6e55cf6b2938bbaf0" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jun 25 18:44:32.429170 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice.
Jun 25 18:44:32.463160 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice.
Jun 25 18:44:32.490733 kubelet[2190]: I0625 18:44:32.490657 2190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:44:32.490733 kubelet[2190]: I0625 18:44:32.490726 2190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:44:32.491188 kubelet[2190]: I0625 18:44:32.490753 2190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:44:32.491188 kubelet[2190]: I0625 18:44:32.490780 2190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost"
Jun 25 18:44:32.491188 kubelet[2190]: I0625 18:44:32.490805 2190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/257a0f59739385a6e55cf6b2938bbaf0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"257a0f59739385a6e55cf6b2938bbaf0\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:44:32.491188 kubelet[2190]: I0625 18:44:32.490825 2190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/257a0f59739385a6e55cf6b2938bbaf0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"257a0f59739385a6e55cf6b2938bbaf0\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:44:32.491188 kubelet[2190]: I0625 18:44:32.490859 2190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:44:32.491377 kubelet[2190]: I0625 18:44:32.490903 2190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 18:44:32.491377 kubelet[2190]: I0625 18:44:32.490930 2190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/257a0f59739385a6e55cf6b2938bbaf0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"257a0f59739385a6e55cf6b2938bbaf0\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 18:44:32.500064 systemd[1]: Created slice kubepods-burstable-pod257a0f59739385a6e55cf6b2938bbaf0.slice - libcontainer container kubepods-burstable-pod257a0f59739385a6e55cf6b2938bbaf0.slice.
Jun 25 18:44:32.603073 kubelet[2190]: E0625 18:44:32.602937 2190 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc539830d96a59 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 18:44:30.868376153 +0000 UTC m=+0.700119308,LastTimestamp:2024-06-25 18:44:30.868376153 +0000 UTC m=+0.700119308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jun 25 18:44:32.760892 kubelet[2190]: E0625 18:44:32.760834 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:44:32.761631 containerd[1452]: time="2024-06-25T18:44:32.761587720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}"
Jun 25 18:44:32.797465 kubelet[2190]: E0625 18:44:32.797415 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:44:32.798071 containerd[1452]: time="2024-06-25T18:44:32.798023146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}"
Jun 25 18:44:32.803489 kubelet[2190]: E0625 18:44:32.803448 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:44:32.804035 containerd[1452]: time="2024-06-25T18:44:32.803999391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:257a0f59739385a6e55cf6b2938bbaf0,Namespace:kube-system,Attempt:0,}"
Jun 25 18:44:32.951072 kubelet[2190]: E0625 18:44:32.951025 2190 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:33.680036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675523092.mount: Deactivated successfully.
Jun 25 18:44:33.688104 containerd[1452]: time="2024-06-25T18:44:33.688042844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:44:33.690010 containerd[1452]: time="2024-06-25T18:44:33.689929178Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 25 18:44:33.690910 containerd[1452]: time="2024-06-25T18:44:33.690860453Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:44:33.691927 containerd[1452]: time="2024-06-25T18:44:33.691898142Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:44:33.692887 containerd[1452]: time="2024-06-25T18:44:33.692823531Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:44:33.693728 containerd[1452]: time="2024-06-25T18:44:33.693675805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 25 18:44:33.694581 containerd[1452]: time="2024-06-25T18:44:33.694550821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jun 25 18:44:33.696345 containerd[1452]: time="2024-06-25T18:44:33.696308140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 18:44:33.698140 containerd[1452]: time="2024-06-25T18:44:33.698114731Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 936.409635ms"
Jun 25 18:44:33.700888 containerd[1452]: time="2024-06-25T18:44:33.699481749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 901.337728ms"
Jun 25 18:44:33.704254 containerd[1452]: time="2024-06-25T18:44:33.704226049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 900.127564ms"
Jun 25 18:44:33.878286 kubelet[2190]: E0625 18:44:33.878234 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="3.2s"
Jun 25 18:44:33.974993 containerd[1452]: time="2024-06-25T18:44:33.974774265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:44:33.976070 containerd[1452]: time="2024-06-25T18:44:33.975186214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:44:33.976070 containerd[1452]: time="2024-06-25T18:44:33.975260473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:44:33.976070 containerd[1452]: time="2024-06-25T18:44:33.975297411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:44:33.976070 containerd[1452]: time="2024-06-25T18:44:33.975337468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:44:33.976274 containerd[1452]: time="2024-06-25T18:44:33.975785143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:44:33.976333 containerd[1452]: time="2024-06-25T18:44:33.976255798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:44:33.976432 containerd[1452]: time="2024-06-25T18:44:33.976405328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:44:33.976944 containerd[1452]: time="2024-06-25T18:44:33.976861746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:44:33.977037 containerd[1452]: time="2024-06-25T18:44:33.976928373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:44:33.977037 containerd[1452]: time="2024-06-25T18:44:33.976949370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:44:33.977037 containerd[1452]: time="2024-06-25T18:44:33.976962945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:44:33.983385 kubelet[2190]: I0625 18:44:33.982992 2190 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 18:44:33.983385 kubelet[2190]: E0625 18:44:33.983271 2190 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost"
Jun 25 18:44:34.096088 systemd[1]: Started cri-containerd-5e889d8d8864036cb43f47e9a48a199a453b7efdeae267f271900a2a60db67c4.scope - libcontainer container 5e889d8d8864036cb43f47e9a48a199a453b7efdeae267f271900a2a60db67c4.
Jun 25 18:44:34.100798 systemd[1]: Started cri-containerd-28e13a12eaf82551709b30dd7d7382da1f54be4290a5680174cee3e6e93be957.scope - libcontainer container 28e13a12eaf82551709b30dd7d7382da1f54be4290a5680174cee3e6e93be957.
Jun 25 18:44:34.102504 systemd[1]: Started cri-containerd-2d4960411ccaf1f2e36015e16eb6eb9596009c854b71c3e3258bc1ff6c5bbfa9.scope - libcontainer container 2d4960411ccaf1f2e36015e16eb6eb9596009c854b71c3e3258bc1ff6c5bbfa9.
Jun 25 18:44:34.156893 kubelet[2190]: W0625 18:44:34.156822 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:34.156893 kubelet[2190]: E0625 18:44:34.156880 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
Jun 25 18:44:34.184352 containerd[1452]: time="2024-06-25T18:44:34.184289968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"28e13a12eaf82551709b30dd7d7382da1f54be4290a5680174cee3e6e93be957\""
Jun 25 18:44:34.184498 containerd[1452]: time="2024-06-25T18:44:34.184315074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:257a0f59739385a6e55cf6b2938bbaf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e889d8d8864036cb43f47e9a48a199a453b7efdeae267f271900a2a60db67c4\""
Jun 25 18:44:34.185667 kubelet[2190]: E0625 18:44:34.185630 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:44:34.186175 kubelet[2190]: E0625 18:44:34.185970 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25
18:44:34.188908 containerd[1452]: time="2024-06-25T18:44:34.188814183Z" level=info msg="CreateContainer within sandbox \"5e889d8d8864036cb43f47e9a48a199a453b7efdeae267f271900a2a60db67c4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:44:34.189365 containerd[1452]: time="2024-06-25T18:44:34.189308191Z" level=info msg="CreateContainer within sandbox \"28e13a12eaf82551709b30dd7d7382da1f54be4290a5680174cee3e6e93be957\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:44:34.214920 containerd[1452]: time="2024-06-25T18:44:34.214837121Z" level=info msg="CreateContainer within sandbox \"5e889d8d8864036cb43f47e9a48a199a453b7efdeae267f271900a2a60db67c4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6882d5f26d6e29896b6372b6015f3601286457940477c68d53c8d63737b7db32\"" Jun 25 18:44:34.215440 containerd[1452]: time="2024-06-25T18:44:34.215407289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d4960411ccaf1f2e36015e16eb6eb9596009c854b71c3e3258bc1ff6c5bbfa9\"" Jun 25 18:44:34.216018 containerd[1452]: time="2024-06-25T18:44:34.215960517Z" level=info msg="StartContainer for \"6882d5f26d6e29896b6372b6015f3601286457940477c68d53c8d63737b7db32\"" Jun 25 18:44:34.219461 containerd[1452]: time="2024-06-25T18:44:34.219421466Z" level=info msg="CreateContainer within sandbox \"28e13a12eaf82551709b30dd7d7382da1f54be4290a5680174cee3e6e93be957\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70a80cb22b4d7f20caf16d015c1721aa5408faa886484d1dac130bcd5d6c6881\"" Jun 25 18:44:34.219794 kubelet[2190]: E0625 18:44:34.219713 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:34.220199 containerd[1452]: 
time="2024-06-25T18:44:34.220093151Z" level=info msg="StartContainer for \"70a80cb22b4d7f20caf16d015c1721aa5408faa886484d1dac130bcd5d6c6881\"" Jun 25 18:44:34.221787 containerd[1452]: time="2024-06-25T18:44:34.221753625Z" level=info msg="CreateContainer within sandbox \"2d4960411ccaf1f2e36015e16eb6eb9596009c854b71c3e3258bc1ff6c5bbfa9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:44:34.241486 containerd[1452]: time="2024-06-25T18:44:34.241400948Z" level=info msg="CreateContainer within sandbox \"2d4960411ccaf1f2e36015e16eb6eb9596009c854b71c3e3258bc1ff6c5bbfa9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb8876a14a2eac33ef4160dbdbf03f0059ddeaa56028341fbb129c431d742306\"" Jun 25 18:44:34.241938 containerd[1452]: time="2024-06-25T18:44:34.241902711Z" level=info msg="StartContainer for \"cb8876a14a2eac33ef4160dbdbf03f0059ddeaa56028341fbb129c431d742306\"" Jun 25 18:44:34.252126 systemd[1]: Started cri-containerd-6882d5f26d6e29896b6372b6015f3601286457940477c68d53c8d63737b7db32.scope - libcontainer container 6882d5f26d6e29896b6372b6015f3601286457940477c68d53c8d63737b7db32. Jun 25 18:44:34.254448 systemd[1]: Started cri-containerd-70a80cb22b4d7f20caf16d015c1721aa5408faa886484d1dac130bcd5d6c6881.scope - libcontainer container 70a80cb22b4d7f20caf16d015c1721aa5408faa886484d1dac130bcd5d6c6881. Jun 25 18:44:34.285034 systemd[1]: Started cri-containerd-cb8876a14a2eac33ef4160dbdbf03f0059ddeaa56028341fbb129c431d742306.scope - libcontainer container cb8876a14a2eac33ef4160dbdbf03f0059ddeaa56028341fbb129c431d742306. 
Jun 25 18:44:34.289901 kubelet[2190]: W0625 18:44:34.289819 2190 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jun 25 18:44:34.289901 kubelet[2190]: E0625 18:44:34.289907 2190 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jun 25 18:44:34.370473 containerd[1452]: time="2024-06-25T18:44:34.370220291Z" level=info msg="StartContainer for \"6882d5f26d6e29896b6372b6015f3601286457940477c68d53c8d63737b7db32\" returns successfully" Jun 25 18:44:34.370473 containerd[1452]: time="2024-06-25T18:44:34.370382089Z" level=info msg="StartContainer for \"70a80cb22b4d7f20caf16d015c1721aa5408faa886484d1dac130bcd5d6c6881\" returns successfully" Jun 25 18:44:34.370473 containerd[1452]: time="2024-06-25T18:44:34.370411126Z" level=info msg="StartContainer for \"cb8876a14a2eac33ef4160dbdbf03f0059ddeaa56028341fbb129c431d742306\" returns successfully" Jun 25 18:44:34.935294 kubelet[2190]: E0625 18:44:34.935113 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:34.937421 kubelet[2190]: E0625 18:44:34.937172 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:34.939068 kubelet[2190]: E0625 18:44:34.939036 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 
18:44:35.943479 kubelet[2190]: E0625 18:44:35.943439 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:36.419028 kubelet[2190]: E0625 18:44:36.418983 2190 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 18:44:36.779012 kubelet[2190]: E0625 18:44:36.778960 2190 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 18:44:37.082022 kubelet[2190]: E0625 18:44:37.081850 2190 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 18:44:37.191891 kubelet[2190]: I0625 18:44:37.191832 2190 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:44:37.199152 kubelet[2190]: I0625 18:44:37.199112 2190 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:44:37.205528 kubelet[2190]: E0625 18:44:37.205497 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:37.306483 kubelet[2190]: E0625 18:44:37.306436 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:37.407733 kubelet[2190]: E0625 18:44:37.407562 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:37.508279 kubelet[2190]: E0625 18:44:37.508218 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:37.608893 kubelet[2190]: E0625 18:44:37.608804 2190 kubelet_node_status.go:462] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jun 25 18:44:37.709788 kubelet[2190]: E0625 18:44:37.709651 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:37.810471 kubelet[2190]: E0625 18:44:37.810410 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:37.910819 kubelet[2190]: E0625 18:44:37.910758 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.011698 kubelet[2190]: E0625 18:44:38.011660 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.112367 kubelet[2190]: E0625 18:44:38.112316 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.203507 kubelet[2190]: E0625 18:44:38.203472 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:38.213453 kubelet[2190]: E0625 18:44:38.213416 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.314203 kubelet[2190]: E0625 18:44:38.314053 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.414539 kubelet[2190]: E0625 18:44:38.414484 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.472256 systemd[1]: Reloading requested from client PID 2470 ('systemctl') (unit session-7.scope)... Jun 25 18:44:38.472287 systemd[1]: Reloading... 
Jun 25 18:44:38.515127 kubelet[2190]: E0625 18:44:38.515088 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.558948 zram_generator::config[2510]: No configuration found. Jun 25 18:44:38.615688 kubelet[2190]: E0625 18:44:38.615572 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.683934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:38.716121 kubelet[2190]: E0625 18:44:38.716087 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.777053 systemd[1]: Reloading finished in 304 ms. Jun 25 18:44:38.816911 kubelet[2190]: E0625 18:44:38.816856 2190 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 18:44:38.826407 kubelet[2190]: I0625 18:44:38.826315 2190 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:38.826399 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:38.849880 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:44:38.850231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:38.850296 systemd[1]: kubelet.service: Consumed 1.404s CPU time, 116.5M memory peak, 0B memory swap peak. Jun 25 18:44:38.856359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:39.013737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 18:44:39.019071 (kubelet)[2552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:39.073716 kubelet[2552]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:39.073716 kubelet[2552]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:44:39.073716 kubelet[2552]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:39.074210 kubelet[2552]: I0625 18:44:39.073762 2552 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:39.079248 kubelet[2552]: I0625 18:44:39.079222 2552 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 18:44:39.079248 kubelet[2552]: I0625 18:44:39.079240 2552 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:39.079435 kubelet[2552]: I0625 18:44:39.079416 2552 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 18:44:39.080699 kubelet[2552]: I0625 18:44:39.080678 2552 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:44:39.081882 kubelet[2552]: I0625 18:44:39.081828 2552 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:39.089396 kubelet[2552]: I0625 18:44:39.089363 2552 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:44:39.089619 kubelet[2552]: I0625 18:44:39.089577 2552 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:39.089779 kubelet[2552]: I0625 18:44:39.089605 2552 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:39.089899 kubelet[2552]: I0625 18:44:39.089793 2552 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 
18:44:39.089899 kubelet[2552]: I0625 18:44:39.089802 2552 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:44:39.089899 kubelet[2552]: I0625 18:44:39.089853 2552 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:39.089985 kubelet[2552]: I0625 18:44:39.089947 2552 kubelet.go:400] "Attempting to sync node with API server" Jun 25 18:44:39.089985 kubelet[2552]: I0625 18:44:39.089958 2552 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:39.089985 kubelet[2552]: I0625 18:44:39.089976 2552 kubelet.go:312] "Adding apiserver pod source" Jun 25 18:44:39.090048 kubelet[2552]: I0625 18:44:39.089993 2552 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:39.095892 kubelet[2552]: I0625 18:44:39.094129 2552 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:39.095892 kubelet[2552]: I0625 18:44:39.094440 2552 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 18:44:39.095892 kubelet[2552]: I0625 18:44:39.095147 2552 server.go:1264] "Started kubelet" Jun 25 18:44:39.095892 kubelet[2552]: I0625 18:44:39.095482 2552 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:39.096144 kubelet[2552]: I0625 18:44:39.096053 2552 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 18:44:39.096901 kubelet[2552]: I0625 18:44:39.096877 2552 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:39.100327 kubelet[2552]: I0625 18:44:39.099273 2552 server.go:455] "Adding debug handlers to kubelet server" Jun 25 18:44:39.100327 kubelet[2552]: I0625 18:44:39.099577 2552 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:39.104598 kubelet[2552]: I0625 18:44:39.104568 2552 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:39.104690 kubelet[2552]: I0625 18:44:39.104663 2552 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 18:44:39.104832 kubelet[2552]: I0625 18:44:39.104806 2552 reconciler.go:26] "Reconciler: start to sync state" Jun 25 18:44:39.106016 kubelet[2552]: I0625 18:44:39.105995 2552 factory.go:221] Registration of the systemd container factory successfully Jun 25 18:44:39.106122 kubelet[2552]: I0625 18:44:39.106085 2552 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 18:44:39.106664 kubelet[2552]: E0625 18:44:39.106627 2552 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:39.107311 kubelet[2552]: I0625 18:44:39.107289 2552 factory.go:221] Registration of the containerd container factory successfully Jun 25 18:44:39.114889 kubelet[2552]: I0625 18:44:39.114795 2552 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:39.116420 kubelet[2552]: I0625 18:44:39.116384 2552 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:44:39.116489 kubelet[2552]: I0625 18:44:39.116427 2552 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:39.116489 kubelet[2552]: I0625 18:44:39.116452 2552 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 18:44:39.116554 kubelet[2552]: E0625 18:44:39.116506 2552 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:44:39.183743 kubelet[2552]: I0625 18:44:39.183707 2552 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:39.183743 kubelet[2552]: I0625 18:44:39.183726 2552 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:39.183743 kubelet[2552]: I0625 18:44:39.183750 2552 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:39.183991 kubelet[2552]: I0625 18:44:39.183947 2552 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:44:39.183991 kubelet[2552]: I0625 18:44:39.183957 2552 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:44:39.183991 kubelet[2552]: I0625 18:44:39.183980 2552 policy_none.go:49] "None policy: Start" Jun 25 18:44:39.184553 kubelet[2552]: I0625 18:44:39.184508 2552 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 18:44:39.184553 kubelet[2552]: I0625 18:44:39.184541 2552 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:39.184781 kubelet[2552]: I0625 18:44:39.184755 2552 state_mem.go:75] "Updated machine memory state" Jun 25 18:44:39.189269 kubelet[2552]: I0625 18:44:39.189238 2552 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:39.189636 kubelet[2552]: I0625 18:44:39.189464 2552 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 18:44:39.189676 kubelet[2552]: I0625 18:44:39.189646 2552 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:39.217736 kubelet[2552]: I0625 18:44:39.217650 2552 topology_manager.go:215] "Topology Admit Handler" podUID="257a0f59739385a6e55cf6b2938bbaf0" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 18:44:39.217848 kubelet[2552]: I0625 18:44:39.217820 2552 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 18:44:39.218020 kubelet[2552]: I0625 18:44:39.218001 2552 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 18:44:39.297374 kubelet[2552]: I0625 18:44:39.296587 2552 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 18:44:39.303844 kubelet[2552]: I0625 18:44:39.303809 2552 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jun 25 18:44:39.304027 kubelet[2552]: I0625 18:44:39.303922 2552 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 18:44:39.305111 kubelet[2552]: I0625 18:44:39.305072 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/257a0f59739385a6e55cf6b2938bbaf0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"257a0f59739385a6e55cf6b2938bbaf0\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:39.305201 kubelet[2552]: I0625 18:44:39.305117 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/257a0f59739385a6e55cf6b2938bbaf0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"257a0f59739385a6e55cf6b2938bbaf0\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:39.305201 kubelet[2552]: I0625 18:44:39.305141 
2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:39.305201 kubelet[2552]: I0625 18:44:39.305158 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:39.305201 kubelet[2552]: I0625 18:44:39.305172 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:39.305201 kubelet[2552]: I0625 18:44:39.305198 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 18:44:39.305479 kubelet[2552]: I0625 18:44:39.305219 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:39.305479 kubelet[2552]: I0625 18:44:39.305235 2552 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 18:44:39.305479 kubelet[2552]: I0625 18:44:39.305270 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/257a0f59739385a6e55cf6b2938bbaf0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"257a0f59739385a6e55cf6b2938bbaf0\") " pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:39.527947 kubelet[2552]: E0625 18:44:39.527910 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:39.528096 kubelet[2552]: E0625 18:44:39.527913 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:39.528096 kubelet[2552]: E0625 18:44:39.528027 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:40.098344 kubelet[2552]: I0625 18:44:40.097418 2552 apiserver.go:52] "Watching apiserver" Jun 25 18:44:40.107574 kubelet[2552]: I0625 18:44:40.105750 2552 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 18:44:40.122924 kubelet[2552]: I0625 18:44:40.121686 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.121653339 podStartE2EDuration="1.121653339s" podCreationTimestamp="2024-06-25 
18:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:40.121344028 +0000 UTC m=+1.097570419" watchObservedRunningTime="2024-06-25 18:44:40.121653339 +0000 UTC m=+1.097879730" Jun 25 18:44:40.167563 kubelet[2552]: E0625 18:44:40.167520 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:40.167776 kubelet[2552]: E0625 18:44:40.167730 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:40.360939 kubelet[2552]: E0625 18:44:40.357032 2552 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 18:44:40.360939 kubelet[2552]: E0625 18:44:40.358456 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:40.420472 kubelet[2552]: I0625 18:44:40.419754 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.419730141 podStartE2EDuration="1.419730141s" podCreationTimestamp="2024-06-25 18:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:40.317157378 +0000 UTC m=+1.293383769" watchObservedRunningTime="2024-06-25 18:44:40.419730141 +0000 UTC m=+1.395956532" Jun 25 18:44:40.438621 kubelet[2552]: I0625 18:44:40.437455 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4374349 
podStartE2EDuration="1.4374349s" podCreationTimestamp="2024-06-25 18:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:40.426228224 +0000 UTC m=+1.402454615" watchObservedRunningTime="2024-06-25 18:44:40.4374349 +0000 UTC m=+1.413661291" Jun 25 18:44:41.168306 kubelet[2552]: E0625 18:44:41.168260 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:41.174598 kubelet[2552]: E0625 18:44:41.174563 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:45.622747 sudo[1635]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:45.625109 sshd[1632]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:45.629657 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:55832.service: Deactivated successfully. Jun 25 18:44:45.632182 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:44:45.632415 systemd[1]: session-7.scope: Consumed 4.639s CPU time, 141.3M memory peak, 0B memory swap peak. Jun 25 18:44:45.632953 systemd-logind[1433]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:44:45.634223 systemd-logind[1433]: Removed session 7. 
Jun 25 18:44:46.434884 kubelet[2552]: E0625 18:44:46.434839 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:47.176805 kubelet[2552]: E0625 18:44:47.176774 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:47.868503 update_engine[1437]: I0625 18:44:47.868382 1437 update_attempter.cc:509] Updating boot flags... Jun 25 18:44:47.869574 kubelet[2552]: E0625 18:44:47.868571 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:47.896946 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2651) Jun 25 18:44:47.928912 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2654) Jun 25 18:44:47.966900 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2654) Jun 25 18:44:48.178699 kubelet[2552]: E0625 18:44:48.178596 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:51.179006 kubelet[2552]: E0625 18:44:51.178973 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:53.962338 kubelet[2552]: I0625 18:44:53.962287 2552 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:44:53.962906 containerd[1452]: time="2024-06-25T18:44:53.962782767Z" level=info msg="No cni config template is specified, wait for other 
system components to drop the config." Jun 25 18:44:53.963169 kubelet[2552]: I0625 18:44:53.963010 2552 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:44:54.691910 kubelet[2552]: I0625 18:44:54.691783 2552 topology_manager.go:215] "Topology Admit Handler" podUID="379c6596-b2a9-4fc6-a26c-775b2986686f" podNamespace="kube-system" podName="kube-proxy-bwxd4" Jun 25 18:44:54.702500 kubelet[2552]: I0625 18:44:54.702441 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/379c6596-b2a9-4fc6-a26c-775b2986686f-kube-proxy\") pod \"kube-proxy-bwxd4\" (UID: \"379c6596-b2a9-4fc6-a26c-775b2986686f\") " pod="kube-system/kube-proxy-bwxd4" Jun 25 18:44:54.702500 kubelet[2552]: I0625 18:44:54.702496 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/379c6596-b2a9-4fc6-a26c-775b2986686f-xtables-lock\") pod \"kube-proxy-bwxd4\" (UID: \"379c6596-b2a9-4fc6-a26c-775b2986686f\") " pod="kube-system/kube-proxy-bwxd4" Jun 25 18:44:54.702723 kubelet[2552]: I0625 18:44:54.702517 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/379c6596-b2a9-4fc6-a26c-775b2986686f-lib-modules\") pod \"kube-proxy-bwxd4\" (UID: \"379c6596-b2a9-4fc6-a26c-775b2986686f\") " pod="kube-system/kube-proxy-bwxd4" Jun 25 18:44:54.702723 kubelet[2552]: I0625 18:44:54.702537 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6hk7\" (UniqueName: \"kubernetes.io/projected/379c6596-b2a9-4fc6-a26c-775b2986686f-kube-api-access-t6hk7\") pod \"kube-proxy-bwxd4\" (UID: \"379c6596-b2a9-4fc6-a26c-775b2986686f\") " pod="kube-system/kube-proxy-bwxd4" Jun 25 18:44:54.703358 systemd[1]: Created slice 
kubepods-besteffort-pod379c6596_b2a9_4fc6_a26c_775b2986686f.slice - libcontainer container kubepods-besteffort-pod379c6596_b2a9_4fc6_a26c_775b2986686f.slice. Jun 25 18:44:54.796297 kubelet[2552]: I0625 18:44:54.796208 2552 topology_manager.go:215] "Topology Admit Handler" podUID="bf28bec7-61f6-44e6-ac02-bbc0f1a5314a" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-tlvqc" Jun 25 18:44:54.803355 kubelet[2552]: I0625 18:44:54.803241 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bf28bec7-61f6-44e6-ac02-bbc0f1a5314a-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-tlvqc\" (UID: \"bf28bec7-61f6-44e6-ac02-bbc0f1a5314a\") " pod="tigera-operator/tigera-operator-76ff79f7fd-tlvqc" Jun 25 18:44:54.803355 kubelet[2552]: I0625 18:44:54.803321 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg5ll\" (UniqueName: \"kubernetes.io/projected/bf28bec7-61f6-44e6-ac02-bbc0f1a5314a-kube-api-access-jg5ll\") pod \"tigera-operator-76ff79f7fd-tlvqc\" (UID: \"bf28bec7-61f6-44e6-ac02-bbc0f1a5314a\") " pod="tigera-operator/tigera-operator-76ff79f7fd-tlvqc" Jun 25 18:44:54.804361 systemd[1]: Created slice kubepods-besteffort-podbf28bec7_61f6_44e6_ac02_bbc0f1a5314a.slice - libcontainer container kubepods-besteffort-podbf28bec7_61f6_44e6_ac02_bbc0f1a5314a.slice. 
Jun 25 18:44:55.017661 kubelet[2552]: E0625 18:44:55.017618 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:55.018294 containerd[1452]: time="2024-06-25T18:44:55.018248484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwxd4,Uid:379c6596-b2a9-4fc6-a26c-775b2986686f,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:55.045256 containerd[1452]: time="2024-06-25T18:44:55.045124872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:55.045400 containerd[1452]: time="2024-06-25T18:44:55.045250496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:55.045483 containerd[1452]: time="2024-06-25T18:44:55.045279031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:55.045483 containerd[1452]: time="2024-06-25T18:44:55.045338235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:55.078108 systemd[1]: Started cri-containerd-c3eff5877f83523bec79b49c480dd01d25b6dd30b14cf4521b7632e97c7987de.scope - libcontainer container c3eff5877f83523bec79b49c480dd01d25b6dd30b14cf4521b7632e97c7987de. 
Jun 25 18:44:55.108349 containerd[1452]: time="2024-06-25T18:44:55.108055093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bwxd4,Uid:379c6596-b2a9-4fc6-a26c-775b2986686f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3eff5877f83523bec79b49c480dd01d25b6dd30b14cf4521b7632e97c7987de\"" Jun 25 18:44:55.108555 containerd[1452]: time="2024-06-25T18:44:55.108489156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-tlvqc,Uid:bf28bec7-61f6-44e6-ac02-bbc0f1a5314a,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:44:55.109048 kubelet[2552]: E0625 18:44:55.108961 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:55.111637 containerd[1452]: time="2024-06-25T18:44:55.111573434Z" level=info msg="CreateContainer within sandbox \"c3eff5877f83523bec79b49c480dd01d25b6dd30b14cf4521b7632e97c7987de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:44:55.138438 containerd[1452]: time="2024-06-25T18:44:55.138311248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:55.138591 containerd[1452]: time="2024-06-25T18:44:55.138470409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:55.138591 containerd[1452]: time="2024-06-25T18:44:55.138520211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:55.138591 containerd[1452]: time="2024-06-25T18:44:55.138543423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:55.149064 containerd[1452]: time="2024-06-25T18:44:55.148992990Z" level=info msg="CreateContainer within sandbox \"c3eff5877f83523bec79b49c480dd01d25b6dd30b14cf4521b7632e97c7987de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8cb14ce2c285097934f62c1e2c6d325ffd05ffe00dbc4f147553e08a68bffa24\"" Jun 25 18:44:55.150772 containerd[1452]: time="2024-06-25T18:44:55.150252610Z" level=info msg="StartContainer for \"8cb14ce2c285097934f62c1e2c6d325ffd05ffe00dbc4f147553e08a68bffa24\"" Jun 25 18:44:55.163180 systemd[1]: Started cri-containerd-67618c1c4eb2957e14e9d4ec0c2fc1539b756296b07e2ff1bc34744cf9c12249.scope - libcontainer container 67618c1c4eb2957e14e9d4ec0c2fc1539b756296b07e2ff1bc34744cf9c12249. Jun 25 18:44:55.185049 systemd[1]: Started cri-containerd-8cb14ce2c285097934f62c1e2c6d325ffd05ffe00dbc4f147553e08a68bffa24.scope - libcontainer container 8cb14ce2c285097934f62c1e2c6d325ffd05ffe00dbc4f147553e08a68bffa24. 
Jun 25 18:44:55.212250 containerd[1452]: time="2024-06-25T18:44:55.212203465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-tlvqc,Uid:bf28bec7-61f6-44e6-ac02-bbc0f1a5314a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"67618c1c4eb2957e14e9d4ec0c2fc1539b756296b07e2ff1bc34744cf9c12249\"" Jun 25 18:44:55.215589 containerd[1452]: time="2024-06-25T18:44:55.215414060Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:44:55.233329 containerd[1452]: time="2024-06-25T18:44:55.233280339Z" level=info msg="StartContainer for \"8cb14ce2c285097934f62c1e2c6d325ffd05ffe00dbc4f147553e08a68bffa24\" returns successfully" Jun 25 18:44:56.193850 kubelet[2552]: E0625 18:44:56.193796 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:56.615984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2157008080.mount: Deactivated successfully. 
Jun 25 18:44:57.091985 containerd[1452]: time="2024-06-25T18:44:57.091917045Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:57.093166 containerd[1452]: time="2024-06-25T18:44:57.093080751Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076080" Jun 25 18:44:57.094748 containerd[1452]: time="2024-06-25T18:44:57.094709717Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:57.098323 containerd[1452]: time="2024-06-25T18:44:57.098214407Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:57.099321 containerd[1452]: time="2024-06-25T18:44:57.099265001Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 1.88381113s" Jun 25 18:44:57.099377 containerd[1452]: time="2024-06-25T18:44:57.099322719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 18:44:57.101912 containerd[1452]: time="2024-06-25T18:44:57.101854639Z" level=info msg="CreateContainer within sandbox \"67618c1c4eb2957e14e9d4ec0c2fc1539b756296b07e2ff1bc34744cf9c12249\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:44:57.115966 containerd[1452]: time="2024-06-25T18:44:57.115910710Z" level=info msg="CreateContainer within sandbox 
\"67618c1c4eb2957e14e9d4ec0c2fc1539b756296b07e2ff1bc34744cf9c12249\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b2a6017161496242ad500512f780579139b0d846b1b0773db66f319c960e2607\"" Jun 25 18:44:57.116373 containerd[1452]: time="2024-06-25T18:44:57.116352607Z" level=info msg="StartContainer for \"b2a6017161496242ad500512f780579139b0d846b1b0773db66f319c960e2607\"" Jun 25 18:44:57.152043 systemd[1]: Started cri-containerd-b2a6017161496242ad500512f780579139b0d846b1b0773db66f319c960e2607.scope - libcontainer container b2a6017161496242ad500512f780579139b0d846b1b0773db66f319c960e2607. Jun 25 18:44:57.248116 containerd[1452]: time="2024-06-25T18:44:57.248042397Z" level=info msg="StartContainer for \"b2a6017161496242ad500512f780579139b0d846b1b0773db66f319c960e2607\" returns successfully" Jun 25 18:44:57.250057 kubelet[2552]: E0625 18:44:57.250022 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:44:58.260169 kubelet[2552]: I0625 18:44:58.259976 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bwxd4" podStartSLOduration=4.259954853 podStartE2EDuration="4.259954853s" podCreationTimestamp="2024-06-25 18:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:56.201722122 +0000 UTC m=+17.177948513" watchObservedRunningTime="2024-06-25 18:44:58.259954853 +0000 UTC m=+19.236181244" Jun 25 18:45:00.412371 kubelet[2552]: I0625 18:45:00.412298 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-tlvqc" podStartSLOduration=4.525684083 podStartE2EDuration="6.412278562s" podCreationTimestamp="2024-06-25 18:44:54 +0000 UTC" firstStartedPulling="2024-06-25 18:44:55.213619509 +0000 UTC 
m=+16.189845900" lastFinishedPulling="2024-06-25 18:44:57.100213987 +0000 UTC m=+18.076440379" observedRunningTime="2024-06-25 18:44:58.260274712 +0000 UTC m=+19.236501103" watchObservedRunningTime="2024-06-25 18:45:00.412278562 +0000 UTC m=+21.388504953" Jun 25 18:45:00.412951 kubelet[2552]: I0625 18:45:00.412466 2552 topology_manager.go:215] "Topology Admit Handler" podUID="2edab6eb-0cd7-45e9-944c-9a6a2d2fb536" podNamespace="calico-system" podName="calico-typha-65b4d99b58-g9jj7" Jun 25 18:45:00.420276 systemd[1]: Created slice kubepods-besteffort-pod2edab6eb_0cd7_45e9_944c_9a6a2d2fb536.slice - libcontainer container kubepods-besteffort-pod2edab6eb_0cd7_45e9_944c_9a6a2d2fb536.slice. Jun 25 18:45:00.440988 kubelet[2552]: I0625 18:45:00.440940 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2edab6eb-0cd7-45e9-944c-9a6a2d2fb536-tigera-ca-bundle\") pod \"calico-typha-65b4d99b58-g9jj7\" (UID: \"2edab6eb-0cd7-45e9-944c-9a6a2d2fb536\") " pod="calico-system/calico-typha-65b4d99b58-g9jj7" Jun 25 18:45:00.440988 kubelet[2552]: I0625 18:45:00.440989 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx4nt\" (UniqueName: \"kubernetes.io/projected/2edab6eb-0cd7-45e9-944c-9a6a2d2fb536-kube-api-access-sx4nt\") pod \"calico-typha-65b4d99b58-g9jj7\" (UID: \"2edab6eb-0cd7-45e9-944c-9a6a2d2fb536\") " pod="calico-system/calico-typha-65b4d99b58-g9jj7" Jun 25 18:45:00.441197 kubelet[2552]: I0625 18:45:00.441011 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2edab6eb-0cd7-45e9-944c-9a6a2d2fb536-typha-certs\") pod \"calico-typha-65b4d99b58-g9jj7\" (UID: \"2edab6eb-0cd7-45e9-944c-9a6a2d2fb536\") " pod="calico-system/calico-typha-65b4d99b58-g9jj7" Jun 25 18:45:00.505248 kubelet[2552]: I0625 18:45:00.505148 2552 
topology_manager.go:215] "Topology Admit Handler" podUID="7709a1b4-f95e-4262-9b8a-c9802fce614b" podNamespace="calico-system" podName="calico-node-p2xpf" Jun 25 18:45:00.513050 systemd[1]: Created slice kubepods-besteffort-pod7709a1b4_f95e_4262_9b8a_c9802fce614b.slice - libcontainer container kubepods-besteffort-pod7709a1b4_f95e_4262_9b8a_c9802fce614b.slice. Jun 25 18:45:00.541743 kubelet[2552]: I0625 18:45:00.541652 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7709a1b4-f95e-4262-9b8a-c9802fce614b-var-run-calico\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.541997 kubelet[2552]: I0625 18:45:00.541759 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7709a1b4-f95e-4262-9b8a-c9802fce614b-node-certs\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.541997 kubelet[2552]: I0625 18:45:00.541800 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7709a1b4-f95e-4262-9b8a-c9802fce614b-cni-bin-dir\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.541997 kubelet[2552]: I0625 18:45:00.541827 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7709a1b4-f95e-4262-9b8a-c9802fce614b-cni-log-dir\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.541997 kubelet[2552]: I0625 18:45:00.541856 2552 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7709a1b4-f95e-4262-9b8a-c9802fce614b-policysync\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.541997 kubelet[2552]: I0625 18:45:00.541918 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7709a1b4-f95e-4262-9b8a-c9802fce614b-lib-modules\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.542174 kubelet[2552]: I0625 18:45:00.541941 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7709a1b4-f95e-4262-9b8a-c9802fce614b-tigera-ca-bundle\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.542174 kubelet[2552]: I0625 18:45:00.541962 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7709a1b4-f95e-4262-9b8a-c9802fce614b-var-lib-calico\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.542174 kubelet[2552]: I0625 18:45:00.541981 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7709a1b4-f95e-4262-9b8a-c9802fce614b-cni-net-dir\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.542174 kubelet[2552]: I0625 18:45:00.542001 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ttzr9\" (UniqueName: \"kubernetes.io/projected/7709a1b4-f95e-4262-9b8a-c9802fce614b-kube-api-access-ttzr9\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.542174 kubelet[2552]: I0625 18:45:00.542023 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7709a1b4-f95e-4262-9b8a-c9802fce614b-xtables-lock\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.542343 kubelet[2552]: I0625 18:45:00.542048 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7709a1b4-f95e-4262-9b8a-c9802fce614b-flexvol-driver-host\") pod \"calico-node-p2xpf\" (UID: \"7709a1b4-f95e-4262-9b8a-c9802fce614b\") " pod="calico-system/calico-node-p2xpf" Jun 25 18:45:00.620694 kubelet[2552]: I0625 18:45:00.620358 2552 topology_manager.go:215] "Topology Admit Handler" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" podNamespace="calico-system" podName="csi-node-driver-fnjft" Jun 25 18:45:00.620694 kubelet[2552]: E0625 18:45:00.620656 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:00.643902 kubelet[2552]: I0625 18:45:00.643309 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3ec7b6d3-70f6-438a-9640-5d7339271cb9-socket-dir\") pod \"csi-node-driver-fnjft\" (UID: \"3ec7b6d3-70f6-438a-9640-5d7339271cb9\") " 
pod="calico-system/csi-node-driver-fnjft" Jun 25 18:45:00.643902 kubelet[2552]: I0625 18:45:00.643376 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3ec7b6d3-70f6-438a-9640-5d7339271cb9-varrun\") pod \"csi-node-driver-fnjft\" (UID: \"3ec7b6d3-70f6-438a-9640-5d7339271cb9\") " pod="calico-system/csi-node-driver-fnjft" Jun 25 18:45:00.643902 kubelet[2552]: I0625 18:45:00.643411 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec7b6d3-70f6-438a-9640-5d7339271cb9-kubelet-dir\") pod \"csi-node-driver-fnjft\" (UID: \"3ec7b6d3-70f6-438a-9640-5d7339271cb9\") " pod="calico-system/csi-node-driver-fnjft" Jun 25 18:45:00.643902 kubelet[2552]: I0625 18:45:00.643433 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzwwt\" (UniqueName: \"kubernetes.io/projected/3ec7b6d3-70f6-438a-9640-5d7339271cb9-kube-api-access-gzwwt\") pod \"csi-node-driver-fnjft\" (UID: \"3ec7b6d3-70f6-438a-9640-5d7339271cb9\") " pod="calico-system/csi-node-driver-fnjft" Jun 25 18:45:00.643902 kubelet[2552]: I0625 18:45:00.643480 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3ec7b6d3-70f6-438a-9640-5d7339271cb9-registration-dir\") pod \"csi-node-driver-fnjft\" (UID: \"3ec7b6d3-70f6-438a-9640-5d7339271cb9\") " pod="calico-system/csi-node-driver-fnjft" Jun 25 18:45:00.650965 kubelet[2552]: E0625 18:45:00.650934 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:00.651997 kubelet[2552]: W0625 18:45:00.651970 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, 
args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:00.652057 kubelet[2552]: E0625 18:45:00.652010 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:00.652358 kubelet[2552]: E0625 18:45:00.652339 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:00.652402 kubelet[2552]: W0625 18:45:00.652357 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:00.652402 kubelet[2552]: E0625 18:45:00.652369 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:00.726462 kubelet[2552]: E0625 18:45:00.726181 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:00.727573 containerd[1452]: time="2024-06-25T18:45:00.727516392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65b4d99b58-g9jj7,Uid:2edab6eb-0cd7-45e9-944c-9a6a2d2fb536,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:00.745187 kubelet[2552]: E0625 18:45:00.745127 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:00.745187 kubelet[2552]: W0625 18:45:00.745158 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:00.745187 kubelet[2552]: E0625 18:45:00.745180 2552 plugins.go:730] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:00.745477 kubelet[2552]: E0625 18:45:00.745444 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:00.745477 kubelet[2552]: W0625 18:45:00.745459 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:00.745477 kubelet[2552]: E0625 18:45:00.745474 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:00.745848 kubelet[2552]: E0625 18:45:00.745807 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:00.746004 kubelet[2552]: W0625 18:45:00.745848 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:00.746004 kubelet[2552]: E0625 18:45:00.745907 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:00.746172 kubelet[2552]: E0625 18:45:00.746147 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:00.746172 kubelet[2552]: W0625 18:45:00.746159 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:00.746234 kubelet[2552]: E0625 18:45:00.746177 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:00.746402 kubelet[2552]: E0625 18:45:00.746386 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:00.746402 kubelet[2552]: W0625 18:45:00.746396 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:00.746483 kubelet[2552]: E0625 18:45:00.746410 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:00.959200 kubelet[2552]: E0625 18:45:00.959169 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:00.959200 kubelet[2552]: W0625 18:45:00.959191 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:00.959298 kubelet[2552]: E0625 18:45:00.959212 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:01.088948 containerd[1452]: time="2024-06-25T18:45:01.088505149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:01.088948 containerd[1452]: time="2024-06-25T18:45:01.088602552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:01.088948 containerd[1452]: time="2024-06-25T18:45:01.088628228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:01.088948 containerd[1452]: time="2024-06-25T18:45:01.088645255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:01.107128 systemd[1]: Started cri-containerd-dd9144c0e353c33b68a9c693f94e5e8becdd3b31854c0415af705334b7b23463.scope - libcontainer container dd9144c0e353c33b68a9c693f94e5e8becdd3b31854c0415af705334b7b23463. 
Jun 25 18:45:01.116828 kubelet[2552]: E0625 18:45:01.116245 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:01.117519 containerd[1452]: time="2024-06-25T18:45:01.117482681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p2xpf,Uid:7709a1b4-f95e-4262-9b8a-c9802fce614b,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:01.158659 containerd[1452]: time="2024-06-25T18:45:01.158614759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65b4d99b58-g9jj7,Uid:2edab6eb-0cd7-45e9-944c-9a6a2d2fb536,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd9144c0e353c33b68a9c693f94e5e8becdd3b31854c0415af705334b7b23463\"" Jun 25 18:45:01.160175 kubelet[2552]: E0625 18:45:01.159467 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:01.160766 containerd[1452]: time="2024-06-25T18:45:01.160720126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:45:01.244272 containerd[1452]: time="2024-06-25T18:45:01.244090701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:01.244272 containerd[1452]: time="2024-06-25T18:45:01.244170014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:01.244631 containerd[1452]: time="2024-06-25T18:45:01.244198015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:01.244631 containerd[1452]: time="2024-06-25T18:45:01.244342650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:01.269295 systemd[1]: Started cri-containerd-4021a668ca3056fc62fdb429a00df12f8d261dc34ea0344173dce11f047ac964.scope - libcontainer container 4021a668ca3056fc62fdb429a00df12f8d261dc34ea0344173dce11f047ac964. Jun 25 18:45:01.300976 containerd[1452]: time="2024-06-25T18:45:01.300909529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p2xpf,Uid:7709a1b4-f95e-4262-9b8a-c9802fce614b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4021a668ca3056fc62fdb429a00df12f8d261dc34ea0344173dce11f047ac964\"" Jun 25 18:45:01.302111 kubelet[2552]: E0625 18:45:01.302064 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:02.117030 kubelet[2552]: E0625 18:45:02.116965 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:04.117774 kubelet[2552]: E0625 18:45:04.117677 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:06.077286 containerd[1452]: time="2024-06-25T18:45:06.077209484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:06.092699 containerd[1452]: time="2024-06-25T18:45:06.092625512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes 
read=29458030" Jun 25 18:45:06.106578 containerd[1452]: time="2024-06-25T18:45:06.106518085Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:06.109514 containerd[1452]: time="2024-06-25T18:45:06.109471586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:06.110046 containerd[1452]: time="2024-06-25T18:45:06.110012428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 4.94925435s" Jun 25 18:45:06.110106 containerd[1452]: time="2024-06-25T18:45:06.110046230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:45:06.111007 containerd[1452]: time="2024-06-25T18:45:06.110793282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:45:06.118902 kubelet[2552]: E0625 18:45:06.118829 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:06.123158 containerd[1452]: time="2024-06-25T18:45:06.123121120Z" level=info msg="CreateContainer within sandbox \"dd9144c0e353c33b68a9c693f94e5e8becdd3b31854c0415af705334b7b23463\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:45:06.143973 containerd[1452]: time="2024-06-25T18:45:06.143919083Z" level=info msg="CreateContainer within sandbox \"dd9144c0e353c33b68a9c693f94e5e8becdd3b31854c0415af705334b7b23463\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6b1cfd11a955cd9165bf323fd883570dc2748269056577c46cffef4aeb82ce82\"" Jun 25 18:45:06.144541 containerd[1452]: time="2024-06-25T18:45:06.144494859Z" level=info msg="StartContainer for \"6b1cfd11a955cd9165bf323fd883570dc2748269056577c46cffef4aeb82ce82\"" Jun 25 18:45:06.186028 systemd[1]: Started cri-containerd-6b1cfd11a955cd9165bf323fd883570dc2748269056577c46cffef4aeb82ce82.scope - libcontainer container 6b1cfd11a955cd9165bf323fd883570dc2748269056577c46cffef4aeb82ce82. Jun 25 18:45:06.226306 containerd[1452]: time="2024-06-25T18:45:06.226237488Z" level=info msg="StartContainer for \"6b1cfd11a955cd9165bf323fd883570dc2748269056577c46cffef4aeb82ce82\" returns successfully" Jun 25 18:45:06.272452 kubelet[2552]: E0625 18:45:06.272022 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:06.277339 kubelet[2552]: E0625 18:45:06.277319 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.277458 kubelet[2552]: W0625 18:45:06.277446 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.277563 kubelet[2552]: E0625 18:45:06.277508 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:06.277815 kubelet[2552]: E0625 18:45:06.277759 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.277815 kubelet[2552]: W0625 18:45:06.277769 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.277815 kubelet[2552]: E0625 18:45:06.277778 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:06.278240 kubelet[2552]: E0625 18:45:06.278087 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.278240 kubelet[2552]: W0625 18:45:06.278097 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.278240 kubelet[2552]: E0625 18:45:06.278106 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:06.278450 kubelet[2552]: E0625 18:45:06.278382 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.278450 kubelet[2552]: W0625 18:45:06.278391 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.278450 kubelet[2552]: E0625 18:45:06.278400 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:06.278774 kubelet[2552]: E0625 18:45:06.278678 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.278774 kubelet[2552]: W0625 18:45:06.278688 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.278774 kubelet[2552]: E0625 18:45:06.278698 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:06.279072 kubelet[2552]: E0625 18:45:06.278983 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.279072 kubelet[2552]: W0625 18:45:06.278993 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.279072 kubelet[2552]: E0625 18:45:06.279003 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:06.279333 kubelet[2552]: E0625 18:45:06.279264 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.279333 kubelet[2552]: W0625 18:45:06.279282 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.279333 kubelet[2552]: E0625 18:45:06.279291 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:06.279635 kubelet[2552]: E0625 18:45:06.279545 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.279635 kubelet[2552]: W0625 18:45:06.279554 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.279635 kubelet[2552]: E0625 18:45:06.279562 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:06.280033 kubelet[2552]: E0625 18:45:06.279977 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.280033 kubelet[2552]: W0625 18:45:06.279988 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.280033 kubelet[2552]: E0625 18:45:06.279998 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[… the same three-message FlexVolume probe-error sequence from kubelet[2552] repeated continuously between 18:45:06.280259 and 18:45:06.298404; intermediate identical entries elided …]
Jun 25 18:45:06.298634 kubelet[2552]: E0625 18:45:06.298623 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:06.298634 kubelet[2552]: W0625 18:45:06.298632 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:06.298674 kubelet[2552]: E0625 18:45:06.298641 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.272394 kubelet[2552]: I0625 18:45:07.272343 2552 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:45:07.273003 kubelet[2552]: E0625 18:45:07.272984 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:07.289898 kubelet[2552]: E0625 18:45:07.289835 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.289898 kubelet[2552]: W0625 18:45:07.289859 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.289898 kubelet[2552]: E0625 18:45:07.289905 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.290181 kubelet[2552]: E0625 18:45:07.290159 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.290181 kubelet[2552]: W0625 18:45:07.290171 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.290181 kubelet[2552]: E0625 18:45:07.290181 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[… the same three-message FlexVolume probe-error sequence from kubelet[2552] repeated continuously between 18:45:07.290411 and 18:45:07.307474; intermediate identical entries elided …]
Jun 25 18:45:07.307762 kubelet[2552]: E0625 18:45:07.307729 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.307762 kubelet[2552]: W0625 18:45:07.307745 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.307762 kubelet[2552]: E0625 18:45:07.307756 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.308029 kubelet[2552]: E0625 18:45:07.308011 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.308029 kubelet[2552]: W0625 18:45:07.308024 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.308110 kubelet[2552]: E0625 18:45:07.308040 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.308267 kubelet[2552]: E0625 18:45:07.308251 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.308267 kubelet[2552]: W0625 18:45:07.308263 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.308349 kubelet[2552]: E0625 18:45:07.308278 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.308513 kubelet[2552]: E0625 18:45:07.308485 2552 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.308513 kubelet[2552]: W0625 18:45:07.308502 2552 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.308573 kubelet[2552]: E0625 18:45:07.308522 2552 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.953659 containerd[1452]: time="2024-06-25T18:45:07.953597718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:07.954546 containerd[1452]: time="2024-06-25T18:45:07.954480821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:45:07.955687 containerd[1452]: time="2024-06-25T18:45:07.955624428Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:07.957953 containerd[1452]: time="2024-06-25T18:45:07.957918206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:07.958781 containerd[1452]: time="2024-06-25T18:45:07.958737564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.847912816s" Jun 25 18:45:07.958814 containerd[1452]: time="2024-06-25T18:45:07.958780566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:45:07.960822 containerd[1452]: time="2024-06-25T18:45:07.960793537Z" level=info msg="CreateContainer within sandbox \"4021a668ca3056fc62fdb429a00df12f8d261dc34ea0344173dce11f047ac964\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:45:07.977476 containerd[1452]: time="2024-06-25T18:45:07.977435984Z" level=info msg="CreateContainer within sandbox \"4021a668ca3056fc62fdb429a00df12f8d261dc34ea0344173dce11f047ac964\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"20efad1b74b1943c0b0d1095f7e41eb76b6f2920f9ea21751d9ea55f0c57f0ab\"" Jun 25 18:45:07.978636 containerd[1452]: time="2024-06-25T18:45:07.977911673Z" level=info msg="StartContainer for \"20efad1b74b1943c0b0d1095f7e41eb76b6f2920f9ea21751d9ea55f0c57f0ab\"" Jun 25 18:45:08.011169 systemd[1]: Started cri-containerd-20efad1b74b1943c0b0d1095f7e41eb76b6f2920f9ea21751d9ea55f0c57f0ab.scope - libcontainer container 20efad1b74b1943c0b0d1095f7e41eb76b6f2920f9ea21751d9ea55f0c57f0ab. Jun 25 18:45:08.044681 containerd[1452]: time="2024-06-25T18:45:08.043770839Z" level=info msg="StartContainer for \"20efad1b74b1943c0b0d1095f7e41eb76b6f2920f9ea21751d9ea55f0c57f0ab\" returns successfully" Jun 25 18:45:08.057023 systemd[1]: cri-containerd-20efad1b74b1943c0b0d1095f7e41eb76b6f2920f9ea21751d9ea55f0c57f0ab.scope: Deactivated successfully. 
Jun 25 18:45:08.117201 kubelet[2552]: E0625 18:45:08.117139 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:08.117400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20efad1b74b1943c0b0d1095f7e41eb76b6f2920f9ea21751d9ea55f0c57f0ab-rootfs.mount: Deactivated successfully. Jun 25 18:45:08.276016 kubelet[2552]: E0625 18:45:08.275985 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:08.299202 kubelet[2552]: I0625 18:45:08.299070 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65b4d99b58-g9jj7" podStartSLOduration=3.348804411 podStartE2EDuration="8.299041469s" podCreationTimestamp="2024-06-25 18:45:00 +0000 UTC" firstStartedPulling="2024-06-25 18:45:01.160430615 +0000 UTC m=+22.136657006" lastFinishedPulling="2024-06-25 18:45:06.110667673 +0000 UTC m=+27.086894064" observedRunningTime="2024-06-25 18:45:06.282772223 +0000 UTC m=+27.258998604" watchObservedRunningTime="2024-06-25 18:45:08.299041469 +0000 UTC m=+29.275267861" Jun 25 18:45:08.627914 containerd[1452]: time="2024-06-25T18:45:08.627723704Z" level=info msg="shim disconnected" id=20efad1b74b1943c0b0d1095f7e41eb76b6f2920f9ea21751d9ea55f0c57f0ab namespace=k8s.io Jun 25 18:45:08.627914 containerd[1452]: time="2024-06-25T18:45:08.627788590Z" level=warning msg="cleaning up after shim disconnected" id=20efad1b74b1943c0b0d1095f7e41eb76b6f2920f9ea21751d9ea55f0c57f0ab namespace=k8s.io Jun 25 18:45:08.627914 containerd[1452]: time="2024-06-25T18:45:08.627810146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 
18:45:09.279596 kubelet[2552]: E0625 18:45:09.279559 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:09.281164 containerd[1452]: time="2024-06-25T18:45:09.281121001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:45:10.117767 kubelet[2552]: E0625 18:45:10.117685 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:11.247770 kubelet[2552]: I0625 18:45:11.247707 2552 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:45:11.276091 kubelet[2552]: E0625 18:45:11.276028 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:11.284103 kubelet[2552]: E0625 18:45:11.284065 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:11.393262 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:42872.service - OpenSSH per-connection server daemon (10.0.0.1:42872). Jun 25 18:45:11.449713 sshd[3268]: Accepted publickey for core from 10.0.0.1 port 42872 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:11.451545 sshd[3268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:11.458360 systemd-logind[1433]: New session 8 of user core. Jun 25 18:45:11.467016 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 18:45:11.595474 sshd[3268]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:11.600359 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:42872.service: Deactivated successfully. Jun 25 18:45:11.602569 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:45:11.603231 systemd-logind[1433]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:45:11.604253 systemd-logind[1433]: Removed session 8. Jun 25 18:45:12.121534 kubelet[2552]: E0625 18:45:12.121440 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:14.117305 kubelet[2552]: E0625 18:45:14.117237 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:14.385488 containerd[1452]: time="2024-06-25T18:45:14.384637028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:14.387160 containerd[1452]: time="2024-06-25T18:45:14.387092581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:45:14.389915 containerd[1452]: time="2024-06-25T18:45:14.388724563Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:14.391647 containerd[1452]: time="2024-06-25T18:45:14.391612505Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:14.392732 containerd[1452]: time="2024-06-25T18:45:14.392700196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.111533889s" Jun 25 18:45:14.392813 containerd[1452]: time="2024-06-25T18:45:14.392732794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:45:14.400487 containerd[1452]: time="2024-06-25T18:45:14.400438548Z" level=info msg="CreateContainer within sandbox \"4021a668ca3056fc62fdb429a00df12f8d261dc34ea0344173dce11f047ac964\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:45:14.498557 containerd[1452]: time="2024-06-25T18:45:14.498486752Z" level=info msg="CreateContainer within sandbox \"4021a668ca3056fc62fdb429a00df12f8d261dc34ea0344173dce11f047ac964\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4715925f82c6992d8af79bcad73397b3960b28948389fce7fc5317cbabf2af44\"" Jun 25 18:45:14.499153 containerd[1452]: time="2024-06-25T18:45:14.499112112Z" level=info msg="StartContainer for \"4715925f82c6992d8af79bcad73397b3960b28948389fce7fc5317cbabf2af44\"" Jun 25 18:45:14.550198 systemd[1]: Started cri-containerd-4715925f82c6992d8af79bcad73397b3960b28948389fce7fc5317cbabf2af44.scope - libcontainer container 4715925f82c6992d8af79bcad73397b3960b28948389fce7fc5317cbabf2af44. 
Jun 25 18:45:14.720764 containerd[1452]: time="2024-06-25T18:45:14.720291480Z" level=info msg="StartContainer for \"4715925f82c6992d8af79bcad73397b3960b28948389fce7fc5317cbabf2af44\" returns successfully" Jun 25 18:45:15.296152 kubelet[2552]: E0625 18:45:15.296100 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:16.116812 kubelet[2552]: E0625 18:45:16.116729 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:16.127467 containerd[1452]: time="2024-06-25T18:45:16.127291955Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:45:16.130481 systemd[1]: cri-containerd-4715925f82c6992d8af79bcad73397b3960b28948389fce7fc5317cbabf2af44.scope: Deactivated successfully. Jun 25 18:45:16.156606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4715925f82c6992d8af79bcad73397b3960b28948389fce7fc5317cbabf2af44-rootfs.mount: Deactivated successfully. 
Jun 25 18:45:16.157013 kubelet[2552]: I0625 18:45:16.156787 2552 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 18:45:16.196676 kubelet[2552]: I0625 18:45:16.194545 2552 topology_manager.go:215] "Topology Admit Handler" podUID="5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r6cll" Jun 25 18:45:16.196676 kubelet[2552]: I0625 18:45:16.195203 2552 topology_manager.go:215] "Topology Admit Handler" podUID="c6c8846e-3e44-489e-802e-c566bf42ad50" podNamespace="calico-system" podName="calico-kube-controllers-6478596484-t4777" Jun 25 18:45:16.196676 kubelet[2552]: I0625 18:45:16.195363 2552 topology_manager.go:215] "Topology Admit Handler" podUID="0591d1d9-4bc9-42e9-9e91-b48fd33d009e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-p6l7c" Jun 25 18:45:16.196940 containerd[1452]: time="2024-06-25T18:45:16.195827047Z" level=info msg="shim disconnected" id=4715925f82c6992d8af79bcad73397b3960b28948389fce7fc5317cbabf2af44 namespace=k8s.io Jun 25 18:45:16.196940 containerd[1452]: time="2024-06-25T18:45:16.195896560Z" level=warning msg="cleaning up after shim disconnected" id=4715925f82c6992d8af79bcad73397b3960b28948389fce7fc5317cbabf2af44 namespace=k8s.io Jun 25 18:45:16.196940 containerd[1452]: time="2024-06-25T18:45:16.195906100Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:16.197949 kubelet[2552]: W0625 18:45:16.197654 2552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jun 25 18:45:16.197949 kubelet[2552]: E0625 18:45:16.197714 2552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list 
resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jun 25 18:45:16.206890 systemd[1]: Created slice kubepods-burstable-pod5885ae5f_9db4_44de_9ed1_fac4dd7f4b3f.slice - libcontainer container kubepods-burstable-pod5885ae5f_9db4_44de_9ed1_fac4dd7f4b3f.slice. Jun 25 18:45:16.213269 systemd[1]: Created slice kubepods-burstable-pod0591d1d9_4bc9_42e9_9e91_b48fd33d009e.slice - libcontainer container kubepods-burstable-pod0591d1d9_4bc9_42e9_9e91_b48fd33d009e.slice. Jun 25 18:45:16.222203 systemd[1]: Created slice kubepods-besteffort-podc6c8846e_3e44_489e_802e_c566bf42ad50.slice - libcontainer container kubepods-besteffort-podc6c8846e_3e44_489e_802e_c566bf42ad50.slice. Jun 25 18:45:16.300064 kubelet[2552]: E0625 18:45:16.300025 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:16.300797 containerd[1452]: time="2024-06-25T18:45:16.300628363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:45:16.368774 kubelet[2552]: I0625 18:45:16.368567 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6c8846e-3e44-489e-802e-c566bf42ad50-tigera-ca-bundle\") pod \"calico-kube-controllers-6478596484-t4777\" (UID: \"c6c8846e-3e44-489e-802e-c566bf42ad50\") " pod="calico-system/calico-kube-controllers-6478596484-t4777" Jun 25 18:45:16.368774 kubelet[2552]: I0625 18:45:16.368617 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76jj9\" (UniqueName: \"kubernetes.io/projected/5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f-kube-api-access-76jj9\") pod \"coredns-7db6d8ff4d-r6cll\" (UID: \"5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f\") " pod="kube-system/coredns-7db6d8ff4d-r6cll" Jun 25 
18:45:16.368774 kubelet[2552]: I0625 18:45:16.368634 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnxx\" (UniqueName: \"kubernetes.io/projected/0591d1d9-4bc9-42e9-9e91-b48fd33d009e-kube-api-access-swnxx\") pod \"coredns-7db6d8ff4d-p6l7c\" (UID: \"0591d1d9-4bc9-42e9-9e91-b48fd33d009e\") " pod="kube-system/coredns-7db6d8ff4d-p6l7c" Jun 25 18:45:16.368774 kubelet[2552]: I0625 18:45:16.368654 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f-config-volume\") pod \"coredns-7db6d8ff4d-r6cll\" (UID: \"5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f\") " pod="kube-system/coredns-7db6d8ff4d-r6cll" Jun 25 18:45:16.368774 kubelet[2552]: I0625 18:45:16.368671 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwwl\" (UniqueName: \"kubernetes.io/projected/c6c8846e-3e44-489e-802e-c566bf42ad50-kube-api-access-blwwl\") pod \"calico-kube-controllers-6478596484-t4777\" (UID: \"c6c8846e-3e44-489e-802e-c566bf42ad50\") " pod="calico-system/calico-kube-controllers-6478596484-t4777" Jun 25 18:45:16.369143 kubelet[2552]: I0625 18:45:16.368685 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0591d1d9-4bc9-42e9-9e91-b48fd33d009e-config-volume\") pod \"coredns-7db6d8ff4d-p6l7c\" (UID: \"0591d1d9-4bc9-42e9-9e91-b48fd33d009e\") " pod="kube-system/coredns-7db6d8ff4d-p6l7c" Jun 25 18:45:16.609334 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:55578.service - OpenSSH per-connection server daemon (10.0.0.1:55578). 
Jun 25 18:45:16.691410 sshd[3357]: Accepted publickey for core from 10.0.0.1 port 55578 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:16.693073 sshd[3357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:16.698007 systemd-logind[1433]: New session 9 of user core. Jun 25 18:45:16.704005 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:45:16.827734 containerd[1452]: time="2024-06-25T18:45:16.827679354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6478596484-t4777,Uid:c6c8846e-3e44-489e-802e-c566bf42ad50,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:16.871078 sshd[3357]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:16.875515 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:55578.service: Deactivated successfully. Jun 25 18:45:16.877789 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:45:16.878487 systemd-logind[1433]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:45:16.879453 systemd-logind[1433]: Removed session 9. 
Jun 25 18:45:17.104073 containerd[1452]: time="2024-06-25T18:45:17.103992488Z" level=error msg="Failed to destroy network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.104612 containerd[1452]: time="2024-06-25T18:45:17.104571784Z" level=error msg="encountered an error cleaning up failed sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.104677 containerd[1452]: time="2024-06-25T18:45:17.104640136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6478596484-t4777,Uid:c6c8846e-3e44-489e-802e-c566bf42ad50,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.111963 kubelet[2552]: E0625 18:45:17.111884 2552 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.112118 kubelet[2552]: E0625 18:45:17.111978 2552 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6478596484-t4777" Jun 25 18:45:17.112118 kubelet[2552]: E0625 18:45:17.112001 2552 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6478596484-t4777" Jun 25 18:45:17.112118 kubelet[2552]: E0625 18:45:17.112051 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6478596484-t4777_calico-system(c6c8846e-3e44-489e-802e-c566bf42ad50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6478596484-t4777_calico-system(c6c8846e-3e44-489e-802e-c566bf42ad50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6478596484-t4777" podUID="c6c8846e-3e44-489e-802e-c566bf42ad50" Jun 25 18:45:17.155971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345-shm.mount: Deactivated successfully. 
Jun 25 18:45:17.302783 kubelet[2552]: I0625 18:45:17.302742 2552 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:17.303385 containerd[1452]: time="2024-06-25T18:45:17.303355286Z" level=info msg="StopPodSandbox for \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\"" Jun 25 18:45:17.307108 containerd[1452]: time="2024-06-25T18:45:17.307036216Z" level=info msg="Ensure that sandbox e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345 in task-service has been cleanup successfully" Jun 25 18:45:17.336568 containerd[1452]: time="2024-06-25T18:45:17.336503140Z" level=error msg="StopPodSandbox for \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\" failed" error="failed to destroy network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.336851 kubelet[2552]: E0625 18:45:17.336801 2552 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:17.336937 kubelet[2552]: E0625 18:45:17.336882 2552 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345"} Jun 25 18:45:17.336986 kubelet[2552]: E0625 18:45:17.336956 2552 kuberuntime_manager.go:1075] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"c6c8846e-3e44-489e-802e-c566bf42ad50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:17.337052 kubelet[2552]: E0625 18:45:17.336984 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6c8846e-3e44-489e-802e-c566bf42ad50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6478596484-t4777" podUID="c6c8846e-3e44-489e-802e-c566bf42ad50" Jun 25 18:45:17.410813 kubelet[2552]: E0625 18:45:17.410639 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:17.412055 containerd[1452]: time="2024-06-25T18:45:17.411327107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r6cll,Uid:5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:17.420336 kubelet[2552]: E0625 18:45:17.420275 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:17.420979 containerd[1452]: time="2024-06-25T18:45:17.420935507Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-p6l7c,Uid:0591d1d9-4bc9-42e9-9e91-b48fd33d009e,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:17.489517 containerd[1452]: time="2024-06-25T18:45:17.489452451Z" level=error msg="Failed to destroy network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.489986 containerd[1452]: time="2024-06-25T18:45:17.489950670Z" level=error msg="encountered an error cleaning up failed sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.490105 containerd[1452]: time="2024-06-25T18:45:17.490007738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r6cll,Uid:5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.490349 kubelet[2552]: E0625 18:45:17.490296 2552 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.490436 kubelet[2552]: 
E0625 18:45:17.490378 2552 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r6cll" Jun 25 18:45:17.490436 kubelet[2552]: E0625 18:45:17.490405 2552 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-r6cll" Jun 25 18:45:17.490543 kubelet[2552]: E0625 18:45:17.490455 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r6cll_kube-system(5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r6cll_kube-system(5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r6cll" podUID="5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f" Jun 25 18:45:17.491133 containerd[1452]: time="2024-06-25T18:45:17.491091015Z" level=error msg="Failed to destroy network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.491559 containerd[1452]: time="2024-06-25T18:45:17.491535412Z" level=error msg="encountered an error cleaning up failed sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.491608 containerd[1452]: time="2024-06-25T18:45:17.491588101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p6l7c,Uid:0591d1d9-4bc9-42e9-9e91-b48fd33d009e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.491798 kubelet[2552]: E0625 18:45:17.491771 2552 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:17.491841 kubelet[2552]: E0625 18:45:17.491810 2552 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-p6l7c" Jun 25 18:45:17.491841 kubelet[2552]: E0625 18:45:17.491828 2552 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-p6l7c" Jun 25 18:45:17.491909 kubelet[2552]: E0625 18:45:17.491856 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-p6l7c_kube-system(0591d1d9-4bc9-42e9-9e91-b48fd33d009e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-p6l7c_kube-system(0591d1d9-4bc9-42e9-9e91-b48fd33d009e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-p6l7c" podUID="0591d1d9-4bc9-42e9-9e91-b48fd33d009e" Jun 25 18:45:18.123911 systemd[1]: Created slice kubepods-besteffort-pod3ec7b6d3_70f6_438a_9640_5d7339271cb9.slice - libcontainer container kubepods-besteffort-pod3ec7b6d3_70f6_438a_9640_5d7339271cb9.slice. Jun 25 18:45:18.126439 containerd[1452]: time="2024-06-25T18:45:18.126402361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fnjft,Uid:3ec7b6d3-70f6-438a-9640-5d7339271cb9,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:18.156292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075-shm.mount: Deactivated successfully. 
Jun 25 18:45:18.156436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf-shm.mount: Deactivated successfully. Jun 25 18:45:18.306287 kubelet[2552]: I0625 18:45:18.306120 2552 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:18.307080 containerd[1452]: time="2024-06-25T18:45:18.306786929Z" level=info msg="StopPodSandbox for \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\"" Jun 25 18:45:18.307080 containerd[1452]: time="2024-06-25T18:45:18.307001211Z" level=info msg="Ensure that sandbox ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075 in task-service has been cleanup successfully" Jun 25 18:45:18.307610 kubelet[2552]: I0625 18:45:18.307579 2552 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:18.308607 containerd[1452]: time="2024-06-25T18:45:18.308525993Z" level=info msg="StopPodSandbox for \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\"" Jun 25 18:45:18.308925 containerd[1452]: time="2024-06-25T18:45:18.308774196Z" level=info msg="Ensure that sandbox 1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf in task-service has been cleanup successfully" Jun 25 18:45:18.337453 containerd[1452]: time="2024-06-25T18:45:18.337387033Z" level=error msg="StopPodSandbox for \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\" failed" error="failed to destroy network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:18.338131 kubelet[2552]: E0625 18:45:18.337666 2552 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:18.338131 kubelet[2552]: E0625 18:45:18.337718 2552 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075"} Jun 25 18:45:18.338131 kubelet[2552]: E0625 18:45:18.337761 2552 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0591d1d9-4bc9-42e9-9e91-b48fd33d009e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:18.338131 kubelet[2552]: E0625 18:45:18.337787 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0591d1d9-4bc9-42e9-9e91-b48fd33d009e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-p6l7c" podUID="0591d1d9-4bc9-42e9-9e91-b48fd33d009e" Jun 25 18:45:18.340581 containerd[1452]: time="2024-06-25T18:45:18.340531783Z" level=error 
msg="StopPodSandbox for \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\" failed" error="failed to destroy network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:18.340749 kubelet[2552]: E0625 18:45:18.340706 2552 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:18.340807 kubelet[2552]: E0625 18:45:18.340748 2552 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf"} Jun 25 18:45:18.340807 kubelet[2552]: E0625 18:45:18.340780 2552 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:18.340925 kubelet[2552]: E0625 18:45:18.340806 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-r6cll" podUID="5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f" Jun 25 18:45:18.569366 containerd[1452]: time="2024-06-25T18:45:18.569303822Z" level=error msg="Failed to destroy network for sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:18.569898 containerd[1452]: time="2024-06-25T18:45:18.569839927Z" level=error msg="encountered an error cleaning up failed sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:18.569951 containerd[1452]: time="2024-06-25T18:45:18.569921194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fnjft,Uid:3ec7b6d3-70f6-438a-9640-5d7339271cb9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:18.570367 kubelet[2552]: E0625 18:45:18.570202 2552 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:18.570367 kubelet[2552]: E0625 18:45:18.570288 2552 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fnjft" Jun 25 18:45:18.570367 kubelet[2552]: E0625 18:45:18.570316 2552 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fnjft" Jun 25 18:45:18.570501 kubelet[2552]: E0625 18:45:18.570375 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fnjft_calico-system(3ec7b6d3-70f6-438a-9640-5d7339271cb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fnjft_calico-system(3ec7b6d3-70f6-438a-9640-5d7339271cb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fnjft" 
podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:18.571919 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e-shm.mount: Deactivated successfully. Jun 25 18:45:19.311088 kubelet[2552]: I0625 18:45:19.311049 2552 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:19.311771 containerd[1452]: time="2024-06-25T18:45:19.311717948Z" level=info msg="StopPodSandbox for \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\"" Jun 25 18:45:19.312107 containerd[1452]: time="2024-06-25T18:45:19.312033427Z" level=info msg="Ensure that sandbox d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e in task-service has been cleanup successfully" Jun 25 18:45:19.342851 containerd[1452]: time="2024-06-25T18:45:19.342790960Z" level=error msg="StopPodSandbox for \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\" failed" error="failed to destroy network for sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:19.343138 kubelet[2552]: E0625 18:45:19.343070 2552 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:19.343138 kubelet[2552]: E0625 18:45:19.343135 2552 kuberuntime_manager.go:1375] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e"} Jun 25 18:45:19.343138 kubelet[2552]: E0625 18:45:19.343175 2552 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ec7b6d3-70f6-438a-9640-5d7339271cb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:19.343461 kubelet[2552]: E0625 18:45:19.343201 2552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ec7b6d3-70f6-438a-9640-5d7339271cb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fnjft" podUID="3ec7b6d3-70f6-438a-9640-5d7339271cb9" Jun 25 18:45:21.886406 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:55594.service - OpenSSH per-connection server daemon (10.0.0.1:55594). Jun 25 18:45:21.937647 sshd[3615]: Accepted publickey for core from 10.0.0.1 port 55594 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:21.939993 sshd[3615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:21.946209 systemd-logind[1433]: New session 10 of user core. Jun 25 18:45:21.962199 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 18:45:22.124689 sshd[3615]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:22.129347 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:55594.service: Deactivated successfully. Jun 25 18:45:22.132097 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:45:22.133111 systemd-logind[1433]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:45:22.136332 systemd-logind[1433]: Removed session 10. Jun 25 18:45:23.609511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370298178.mount: Deactivated successfully. Jun 25 18:45:24.725513 containerd[1452]: time="2024-06-25T18:45:24.725436143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:24.727287 containerd[1452]: time="2024-06-25T18:45:24.727042172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 18:45:24.728939 containerd[1452]: time="2024-06-25T18:45:24.728889705Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:24.732344 containerd[1452]: time="2024-06-25T18:45:24.732285288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:24.732855 containerd[1452]: time="2024-06-25T18:45:24.732819529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.43215433s" Jun 25 18:45:24.732916 containerd[1452]: 
time="2024-06-25T18:45:24.732856384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 18:45:24.744199 containerd[1452]: time="2024-06-25T18:45:24.743528104Z" level=info msg="CreateContainer within sandbox \"4021a668ca3056fc62fdb429a00df12f8d261dc34ea0344173dce11f047ac964\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:45:24.765612 containerd[1452]: time="2024-06-25T18:45:24.765534749Z" level=info msg="CreateContainer within sandbox \"4021a668ca3056fc62fdb429a00df12f8d261dc34ea0344173dce11f047ac964\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c20229c544f099557536ff172bce7ea3f4720ada046222b142d6d44af42ce5e0\"" Jun 25 18:45:24.766233 containerd[1452]: time="2024-06-25T18:45:24.766168924Z" level=info msg="StartContainer for \"c20229c544f099557536ff172bce7ea3f4720ada046222b142d6d44af42ce5e0\"" Jun 25 18:45:24.820156 systemd[1]: Started cri-containerd-c20229c544f099557536ff172bce7ea3f4720ada046222b142d6d44af42ce5e0.scope - libcontainer container c20229c544f099557536ff172bce7ea3f4720ada046222b142d6d44af42ce5e0. Jun 25 18:45:24.855811 containerd[1452]: time="2024-06-25T18:45:24.855761099Z" level=info msg="StartContainer for \"c20229c544f099557536ff172bce7ea3f4720ada046222b142d6d44af42ce5e0\" returns successfully" Jun 25 18:45:25.002018 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:45:25.002171 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jun 25 18:45:25.326564 kubelet[2552]: E0625 18:45:25.326427 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:25.368583 kubelet[2552]: I0625 18:45:25.368508 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p2xpf" podStartSLOduration=1.9379052350000001 podStartE2EDuration="25.368491748s" podCreationTimestamp="2024-06-25 18:45:00 +0000 UTC" firstStartedPulling="2024-06-25 18:45:01.303053921 +0000 UTC m=+22.279280313" lastFinishedPulling="2024-06-25 18:45:24.733640435 +0000 UTC m=+45.709866826" observedRunningTime="2024-06-25 18:45:25.368034495 +0000 UTC m=+46.344260887" watchObservedRunningTime="2024-06-25 18:45:25.368491748 +0000 UTC m=+46.344718159" Jun 25 18:45:26.335144 kubelet[2552]: E0625 18:45:26.335098 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:27.138073 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:53378.service - OpenSSH per-connection server daemon (10.0.0.1:53378). Jun 25 18:45:27.194959 sshd[3865]: Accepted publickey for core from 10.0.0.1 port 53378 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:27.197229 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:27.204677 systemd-logind[1433]: New session 11 of user core. Jun 25 18:45:27.208132 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:45:27.259234 systemd-networkd[1387]: vxlan.calico: Link UP Jun 25 18:45:27.259243 systemd-networkd[1387]: vxlan.calico: Gained carrier Jun 25 18:45:27.374058 sshd[3865]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:27.381661 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:53378.service: Deactivated successfully. 
Jun 25 18:45:27.384165 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:45:27.385331 systemd-logind[1433]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:45:27.395333 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:53380.service - OpenSSH per-connection server daemon (10.0.0.1:53380). Jun 25 18:45:27.396250 systemd-logind[1433]: Removed session 11. Jun 25 18:45:27.428457 sshd[3958]: Accepted publickey for core from 10.0.0.1 port 53380 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:27.430311 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:27.436059 systemd-logind[1433]: New session 12 of user core. Jun 25 18:45:27.445208 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:45:27.746353 sshd[3958]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:27.761923 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:53380.service: Deactivated successfully. Jun 25 18:45:27.763801 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:45:27.765923 systemd-logind[1433]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:45:27.771309 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:53396.service - OpenSSH per-connection server daemon (10.0.0.1:53396). Jun 25 18:45:27.772180 systemd-logind[1433]: Removed session 12. Jun 25 18:45:27.806123 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 53396 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:27.807847 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:27.812401 systemd-logind[1433]: New session 13 of user core. Jun 25 18:45:27.822060 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 25 18:45:28.113092 sshd[3972]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:28.116192 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:53396.service: Deactivated successfully. Jun 25 18:45:28.117432 containerd[1452]: time="2024-06-25T18:45:28.117287832Z" level=info msg="StopPodSandbox for \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\"" Jun 25 18:45:28.119447 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:45:28.123378 systemd-logind[1433]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:45:28.124658 systemd-logind[1433]: Removed session 13. Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.527 [INFO][4003] k8s.go 608: Cleaning up netns ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.527 [INFO][4003] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" iface="eth0" netns="/var/run/netns/cni-fbe05341-508a-54b5-93af-4f186e8c8e0c" Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.527 [INFO][4003] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" iface="eth0" netns="/var/run/netns/cni-fbe05341-508a-54b5-93af-4f186e8c8e0c" Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.530 [INFO][4003] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" iface="eth0" netns="/var/run/netns/cni-fbe05341-508a-54b5-93af-4f186e8c8e0c" Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.530 [INFO][4003] k8s.go 615: Releasing IP address(es) ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.530 [INFO][4003] utils.go 188: Calico CNI releasing IP address ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.621 [INFO][4011] ipam_plugin.go 411: Releasing address using handleID ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" HandleID="k8s-pod-network.e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.621 [INFO][4011] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.622 [INFO][4011] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.693 [WARNING][4011] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" HandleID="k8s-pod-network.e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.693 [INFO][4011] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" HandleID="k8s-pod-network.e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.696 [INFO][4011] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:28.702494 containerd[1452]: 2024-06-25 18:45:28.698 [INFO][4003] k8s.go 621: Teardown processing complete. ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:28.706071 containerd[1452]: time="2024-06-25T18:45:28.706011448Z" level=info msg="TearDown network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\" successfully" Jun 25 18:45:28.706071 containerd[1452]: time="2024-06-25T18:45:28.706061249Z" level=info msg="StopPodSandbox for \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\" returns successfully" Jun 25 18:45:28.706210 systemd[1]: run-netns-cni\x2dfbe05341\x2d508a\x2d54b5\x2d93af\x2d4f186e8c8e0c.mount: Deactivated successfully. 
Jun 25 18:45:28.707640 containerd[1452]: time="2024-06-25T18:45:28.707531909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6478596484-t4777,Uid:c6c8846e-3e44-489e-802e-c566bf42ad50,Namespace:calico-system,Attempt:1,}" Jun 25 18:45:29.118545 containerd[1452]: time="2024-06-25T18:45:29.118502265Z" level=info msg="StopPodSandbox for \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\"" Jun 25 18:45:29.121462 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.177 [INFO][4035] k8s.go 608: Cleaning up netns ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.177 [INFO][4035] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" iface="eth0" netns="/var/run/netns/cni-c0fb7a0b-580c-32b0-a057-af3bf07923c1" Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.178 [INFO][4035] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" iface="eth0" netns="/var/run/netns/cni-c0fb7a0b-580c-32b0-a057-af3bf07923c1" Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.178 [INFO][4035] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" iface="eth0" netns="/var/run/netns/cni-c0fb7a0b-580c-32b0-a057-af3bf07923c1" Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.178 [INFO][4035] k8s.go 615: Releasing IP address(es) ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.178 [INFO][4035] utils.go 188: Calico CNI releasing IP address ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.207 [INFO][4057] ipam_plugin.go 411: Releasing address using handleID ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" HandleID="k8s-pod-network.ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.208 [INFO][4057] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.208 [INFO][4057] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.214 [WARNING][4057] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" HandleID="k8s-pod-network.ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.214 [INFO][4057] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" HandleID="k8s-pod-network.ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.218 [INFO][4057] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:29.224549 containerd[1452]: 2024-06-25 18:45:29.222 [INFO][4035] k8s.go 621: Teardown processing complete. ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:29.226232 containerd[1452]: time="2024-06-25T18:45:29.224805603Z" level=info msg="TearDown network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\" successfully" Jun 25 18:45:29.226232 containerd[1452]: time="2024-06-25T18:45:29.224838650Z" level=info msg="StopPodSandbox for \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\" returns successfully" Jun 25 18:45:29.226305 kubelet[2552]: E0625 18:45:29.225384 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:29.226708 containerd[1452]: time="2024-06-25T18:45:29.226563422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p6l7c,Uid:0591d1d9-4bc9-42e9-9e91-b48fd33d009e,Namespace:kube-system,Attempt:1,}" Jun 25 18:45:29.229056 systemd[1]: run-netns-cni\x2dc0fb7a0b\x2d580c\x2d32b0\x2da057\x2daf3bf07923c1.mount: Deactivated successfully. 
Jun 25 18:45:29.293696 systemd-networkd[1387]: calic797a79782e: Link UP Jun 25 18:45:29.294770 systemd-networkd[1387]: calic797a79782e: Gained carrier Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.196 [INFO][4043] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0 calico-kube-controllers-6478596484- calico-system c6c8846e-3e44-489e-802e-c566bf42ad50 815 0 2024-06-25 18:45:00 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6478596484 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6478596484-t4777 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic797a79782e [] []}} ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Namespace="calico-system" Pod="calico-kube-controllers-6478596484-t4777" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6478596484--t4777-" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.196 [INFO][4043] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Namespace="calico-system" Pod="calico-kube-controllers-6478596484-t4777" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.236 [INFO][4066] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" HandleID="k8s-pod-network.d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 
18:45:29.254 [INFO][4066] ipam_plugin.go 264: Auto assigning IP ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" HandleID="k8s-pod-network.d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000321ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6478596484-t4777", "timestamp":"2024-06-25 18:45:29.23694737 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.254 [INFO][4066] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.254 [INFO][4066] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.254 [INFO][4066] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.258 [INFO][4066] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" host="localhost" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.266 [INFO][4066] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.272 [INFO][4066] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.274 [INFO][4066] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.276 [INFO][4066] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.276 [INFO][4066] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" host="localhost" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.277 [INFO][4066] ipam.go 1685: Creating new handle: k8s-pod-network.d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51 Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.281 [INFO][4066] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" host="localhost" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.286 [INFO][4066] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" 
host="localhost" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.286 [INFO][4066] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" host="localhost" Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.286 [INFO][4066] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:29.321172 containerd[1452]: 2024-06-25 18:45:29.286 [INFO][4066] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" HandleID="k8s-pod-network.d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:29.321717 containerd[1452]: 2024-06-25 18:45:29.289 [INFO][4043] k8s.go 386: Populated endpoint ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Namespace="calico-system" Pod="calico-kube-controllers-6478596484-t4777" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0", GenerateName:"calico-kube-controllers-6478596484-", Namespace:"calico-system", SelfLink:"", UID:"c6c8846e-3e44-489e-802e-c566bf42ad50", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6478596484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6478596484-t4777", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic797a79782e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.321717 containerd[1452]: 2024-06-25 18:45:29.290 [INFO][4043] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Namespace="calico-system" Pod="calico-kube-controllers-6478596484-t4777" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:29.321717 containerd[1452]: 2024-06-25 18:45:29.290 [INFO][4043] dataplane_linux.go 68: Setting the host side veth name to calic797a79782e ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Namespace="calico-system" Pod="calico-kube-controllers-6478596484-t4777" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:29.321717 containerd[1452]: 2024-06-25 18:45:29.296 [INFO][4043] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Namespace="calico-system" Pod="calico-kube-controllers-6478596484-t4777" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:29.321717 containerd[1452]: 2024-06-25 18:45:29.296 [INFO][4043] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Namespace="calico-system" Pod="calico-kube-controllers-6478596484-t4777" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0", GenerateName:"calico-kube-controllers-6478596484-", Namespace:"calico-system", SelfLink:"", UID:"c6c8846e-3e44-489e-802e-c566bf42ad50", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6478596484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51", Pod:"calico-kube-controllers-6478596484-t4777", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic797a79782e", MAC:"e2:6a:9c:a1:59:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.321717 containerd[1452]: 2024-06-25 18:45:29.314 [INFO][4043] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51" Namespace="calico-system" Pod="calico-kube-controllers-6478596484-t4777" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:29.367943 containerd[1452]: time="2024-06-25T18:45:29.367196756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:29.367943 containerd[1452]: time="2024-06-25T18:45:29.367294285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.367943 containerd[1452]: time="2024-06-25T18:45:29.367317592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:29.367943 containerd[1452]: time="2024-06-25T18:45:29.367331219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.403322 systemd[1]: Started cri-containerd-d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51.scope - libcontainer container d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51. 
Jun 25 18:45:29.410849 systemd-networkd[1387]: cali1bd1dd79790: Link UP Jun 25 18:45:29.412925 systemd-networkd[1387]: cali1bd1dd79790: Gained carrier Jun 25 18:45:29.430488 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.290 [INFO][4075] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0 coredns-7db6d8ff4d- kube-system 0591d1d9-4bc9-42e9-9e91-b48fd33d009e 820 0 2024-06-25 18:44:54 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-p6l7c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1bd1dd79790 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-p6l7c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--p6l7c-" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.290 [INFO][4075] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-p6l7c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.334 [INFO][4092] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" HandleID="k8s-pod-network.d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.343 [INFO][4092] ipam_plugin.go 264: Auto assigning IP 
ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" HandleID="k8s-pod-network.d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294bd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-p6l7c", "timestamp":"2024-06-25 18:45:29.33466378 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.343 [INFO][4092] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.343 [INFO][4092] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.343 [INFO][4092] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.345 [INFO][4092] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" host="localhost" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.348 [INFO][4092] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.370 [INFO][4092] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.374 [INFO][4092] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.378 [INFO][4092] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 18:45:29.437643 
containerd[1452]: 2024-06-25 18:45:29.378 [INFO][4092] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" host="localhost" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.382 [INFO][4092] ipam.go 1685: Creating new handle: k8s-pod-network.d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.388 [INFO][4092] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" host="localhost" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.396 [INFO][4092] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" host="localhost" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.396 [INFO][4092] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" host="localhost" Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.396 [INFO][4092] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:29.437643 containerd[1452]: 2024-06-25 18:45:29.396 [INFO][4092] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" HandleID="k8s-pod-network.d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.438587 containerd[1452]: 2024-06-25 18:45:29.402 [INFO][4075] k8s.go 386: Populated endpoint ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-p6l7c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0591d1d9-4bc9-42e9-9e91-b48fd33d009e", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-p6l7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bd1dd79790", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.438587 containerd[1452]: 2024-06-25 18:45:29.402 [INFO][4075] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-p6l7c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.438587 containerd[1452]: 2024-06-25 18:45:29.402 [INFO][4075] dataplane_linux.go 68: Setting the host side veth name to cali1bd1dd79790 ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-p6l7c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.438587 containerd[1452]: 2024-06-25 18:45:29.414 [INFO][4075] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-p6l7c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.438587 containerd[1452]: 2024-06-25 18:45:29.415 [INFO][4075] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-p6l7c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0591d1d9-4bc9-42e9-9e91-b48fd33d009e", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f", Pod:"coredns-7db6d8ff4d-p6l7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bd1dd79790", MAC:"5e:df:a9:f5:5a:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:29.438587 containerd[1452]: 2024-06-25 18:45:29.433 [INFO][4075] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-p6l7c" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:29.461233 containerd[1452]: time="2024-06-25T18:45:29.461178573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6478596484-t4777,Uid:c6c8846e-3e44-489e-802e-c566bf42ad50,Namespace:calico-system,Attempt:1,} returns sandbox id \"d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51\"" Jun 25 18:45:29.463140 containerd[1452]: time="2024-06-25T18:45:29.463110425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:45:29.503812 containerd[1452]: time="2024-06-25T18:45:29.503557344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:29.504067 containerd[1452]: time="2024-06-25T18:45:29.503686276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.504067 containerd[1452]: time="2024-06-25T18:45:29.503824426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:29.504067 containerd[1452]: time="2024-06-25T18:45:29.503848505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:29.525073 systemd[1]: Started cri-containerd-d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f.scope - libcontainer container d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f. 
Jun 25 18:45:29.538816 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:45:29.564859 containerd[1452]: time="2024-06-25T18:45:29.564795345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p6l7c,Uid:0591d1d9-4bc9-42e9-9e91-b48fd33d009e,Namespace:kube-system,Attempt:1,} returns sandbox id \"d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f\"" Jun 25 18:45:29.565787 kubelet[2552]: E0625 18:45:29.565746 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:29.567650 containerd[1452]: time="2024-06-25T18:45:29.567619569Z" level=info msg="CreateContainer within sandbox \"d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:30.077771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547690936.mount: Deactivated successfully. Jun 25 18:45:30.080803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407521215.mount: Deactivated successfully. Jun 25 18:45:30.450951 containerd[1452]: time="2024-06-25T18:45:30.450784699Z" level=info msg="CreateContainer within sandbox \"d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9836dbeb6a950f93fc93e0eb084ac4315bdbc15b6381bc83f5d92d3cec5a3957\"" Jun 25 18:45:30.459143 containerd[1452]: time="2024-06-25T18:45:30.459085431Z" level=info msg="StartContainer for \"9836dbeb6a950f93fc93e0eb084ac4315bdbc15b6381bc83f5d92d3cec5a3957\"" Jun 25 18:45:30.491427 systemd[1]: Started cri-containerd-9836dbeb6a950f93fc93e0eb084ac4315bdbc15b6381bc83f5d92d3cec5a3957.scope - libcontainer container 9836dbeb6a950f93fc93e0eb084ac4315bdbc15b6381bc83f5d92d3cec5a3957. 
Jun 25 18:45:30.542217 containerd[1452]: time="2024-06-25T18:45:30.542167430Z" level=info msg="StartContainer for \"9836dbeb6a950f93fc93e0eb084ac4315bdbc15b6381bc83f5d92d3cec5a3957\" returns successfully"
Jun 25 18:45:30.849116 systemd-networkd[1387]: calic797a79782e: Gained IPv6LL
Jun 25 18:45:30.849671 systemd-networkd[1387]: cali1bd1dd79790: Gained IPv6LL
Jun 25 18:45:31.118508 containerd[1452]: time="2024-06-25T18:45:31.118338363Z" level=info msg="StopPodSandbox for \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\""
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.194 [INFO][4261] k8s.go 608: Cleaning up netns ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf"
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.194 [INFO][4261] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" iface="eth0" netns="/var/run/netns/cni-87ef18c2-1734-8a1f-38d0-5a0203fe08d3"
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.195 [INFO][4261] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" iface="eth0" netns="/var/run/netns/cni-87ef18c2-1734-8a1f-38d0-5a0203fe08d3"
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.195 [INFO][4261] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" iface="eth0" netns="/var/run/netns/cni-87ef18c2-1734-8a1f-38d0-5a0203fe08d3"
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.195 [INFO][4261] k8s.go 615: Releasing IP address(es) ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf"
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.195 [INFO][4261] utils.go 188: Calico CNI releasing IP address ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf"
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.223 [INFO][4269] ipam_plugin.go 411: Releasing address using handleID ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" HandleID="k8s-pod-network.1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.224 [INFO][4269] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.224 [INFO][4269] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.231 [WARNING][4269] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" HandleID="k8s-pod-network.1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.231 [INFO][4269] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" HandleID="k8s-pod-network.1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.234 [INFO][4269] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 18:45:31.240366 containerd[1452]: 2024-06-25 18:45:31.236 [INFO][4261] k8s.go 621: Teardown processing complete. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf"
Jun 25 18:45:31.242294 containerd[1452]: time="2024-06-25T18:45:31.241758036Z" level=info msg="TearDown network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\" successfully"
Jun 25 18:45:31.242294 containerd[1452]: time="2024-06-25T18:45:31.241796905Z" level=info msg="StopPodSandbox for \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\" returns successfully"
Jun 25 18:45:31.242420 kubelet[2552]: E0625 18:45:31.242373 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:45:31.243529 containerd[1452]: time="2024-06-25T18:45:31.243309400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r6cll,Uid:5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f,Namespace:kube-system,Attempt:1,}"
Jun 25 18:45:31.247125 systemd[1]: run-netns-cni\x2d87ef18c2\x2d1734\x2d8a1f\x2d38d0\x2d5a0203fe08d3.mount: Deactivated successfully.
Jun 25 18:45:31.353552 kubelet[2552]: E0625 18:45:31.353502 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:45:31.420353 kubelet[2552]: I0625 18:45:31.419801 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p6l7c" podStartSLOduration=37.419776583 podStartE2EDuration="37.419776583s" podCreationTimestamp="2024-06-25 18:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:31.386462407 +0000 UTC m=+52.362688818" watchObservedRunningTime="2024-06-25 18:45:31.419776583 +0000 UTC m=+52.396002984"
Jun 25 18:45:31.520340 systemd-networkd[1387]: cali458580e1041: Link UP
Jun 25 18:45:31.521381 systemd-networkd[1387]: cali458580e1041: Gained carrier
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.327 [INFO][4277] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0 coredns-7db6d8ff4d- kube-system 5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f 843 0 2024-06-25 18:44:54 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-r6cll eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali458580e1041 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r6cll" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r6cll-"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.328 [INFO][4277] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r6cll" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.422 [INFO][4292] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" HandleID="k8s-pod-network.58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.448 [INFO][4292] ipam_plugin.go 264: Auto assigning IP ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" HandleID="k8s-pod-network.58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00060a750), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-r6cll", "timestamp":"2024-06-25 18:45:31.422211089 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.448 [INFO][4292] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.448 [INFO][4292] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.448 [INFO][4292] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.453 [INFO][4292] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" host="localhost"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.461 [INFO][4292] ipam.go 372: Looking up existing affinities for host host="localhost"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.469 [INFO][4292] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.473 [INFO][4292] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.477 [INFO][4292] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.477 [INFO][4292] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" host="localhost"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.485 [INFO][4292] ipam.go 1685: Creating new handle: k8s-pod-network.58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.491 [INFO][4292] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" host="localhost"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.514 [INFO][4292] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" host="localhost"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.514 [INFO][4292] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" host="localhost"
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.514 [INFO][4292] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 18:45:31.580548 containerd[1452]: 2024-06-25 18:45:31.514 [INFO][4292] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" HandleID="k8s-pod-network.58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.581583 containerd[1452]: 2024-06-25 18:45:31.517 [INFO][4277] k8s.go 386: Populated endpoint ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r6cll" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-r6cll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali458580e1041", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 18:45:31.581583 containerd[1452]: 2024-06-25 18:45:31.517 [INFO][4277] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r6cll" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.581583 containerd[1452]: 2024-06-25 18:45:31.518 [INFO][4277] dataplane_linux.go 68: Setting the host side veth name to cali458580e1041 ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r6cll" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.581583 containerd[1452]: 2024-06-25 18:45:31.520 [INFO][4277] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r6cll" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.581583 containerd[1452]: 2024-06-25 18:45:31.521 [INFO][4277] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r6cll" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d", Pod:"coredns-7db6d8ff4d-r6cll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali458580e1041", MAC:"c2:25:d7:d0:97:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 18:45:31.581583 containerd[1452]: 2024-06-25 18:45:31.576 [INFO][4277] k8s.go 500: Wrote updated endpoint to datastore ContainerID="58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-r6cll" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0"
Jun 25 18:45:31.612309 containerd[1452]: time="2024-06-25T18:45:31.611919634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:45:31.612309 containerd[1452]: time="2024-06-25T18:45:31.612012652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:45:31.612309 containerd[1452]: time="2024-06-25T18:45:31.612037623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:45:31.612309 containerd[1452]: time="2024-06-25T18:45:31.612054888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:45:31.644608 systemd[1]: Started cri-containerd-58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d.scope - libcontainer container 58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d.
Jun 25 18:45:31.665856 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jun 25 18:45:31.693910 containerd[1452]: time="2024-06-25T18:45:31.693746877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r6cll,Uid:5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f,Namespace:kube-system,Attempt:1,} returns sandbox id \"58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d\""
Jun 25 18:45:31.694976 kubelet[2552]: E0625 18:45:31.694924 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:45:31.697218 containerd[1452]: time="2024-06-25T18:45:31.697177924Z" level=info msg="CreateContainer within sandbox \"58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 25 18:45:31.847148 containerd[1452]: time="2024-06-25T18:45:31.846999719Z" level=info msg="CreateContainer within sandbox \"58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c2fec9cf03d6511ffa05e7fc1c9b1cbc84ad190821f9b83c053e46ffb734ebb\""
Jun 25 18:45:31.849499 containerd[1452]: time="2024-06-25T18:45:31.849143455Z" level=info msg="StartContainer for \"3c2fec9cf03d6511ffa05e7fc1c9b1cbc84ad190821f9b83c053e46ffb734ebb\""
Jun 25 18:45:31.910259 systemd[1]: Started cri-containerd-3c2fec9cf03d6511ffa05e7fc1c9b1cbc84ad190821f9b83c053e46ffb734ebb.scope - libcontainer container 3c2fec9cf03d6511ffa05e7fc1c9b1cbc84ad190821f9b83c053e46ffb734ebb.
Jun 25 18:45:31.952085 containerd[1452]: time="2024-06-25T18:45:31.951905910Z" level=info msg="StartContainer for \"3c2fec9cf03d6511ffa05e7fc1c9b1cbc84ad190821f9b83c053e46ffb734ebb\" returns successfully"
Jun 25 18:45:32.359167 kubelet[2552]: E0625 18:45:32.359014 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:45:32.359847 kubelet[2552]: E0625 18:45:32.359229 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:45:32.577076 systemd-networkd[1387]: cali458580e1041: Gained IPv6LL
Jun 25 18:45:32.683988 kubelet[2552]: I0625 18:45:32.683678 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r6cll" podStartSLOduration=38.683655094 podStartE2EDuration="38.683655094s" podCreationTimestamp="2024-06-25 18:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:32.371993027 +0000 UTC m=+53.348219428" watchObservedRunningTime="2024-06-25 18:45:32.683655094 +0000 UTC m=+53.659881485"
Jun 25 18:45:32.926142 containerd[1452]: time="2024-06-25T18:45:32.926066844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:45:32.927747 containerd[1452]: time="2024-06-25T18:45:32.927693410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793"
Jun 25 18:45:32.929524 containerd[1452]: time="2024-06-25T18:45:32.929460408Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:45:32.933079 containerd[1452]: time="2024-06-25T18:45:32.932995720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:45:32.934453 containerd[1452]: time="2024-06-25T18:45:32.934001999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.470851662s"
Jun 25 18:45:32.934453 containerd[1452]: time="2024-06-25T18:45:32.934061229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\""
Jun 25 18:45:32.951041 containerd[1452]: time="2024-06-25T18:45:32.950961262Z" level=info msg="CreateContainer within sandbox \"d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jun 25 18:45:32.983555 containerd[1452]: time="2024-06-25T18:45:32.983491466Z" level=info msg="CreateContainer within sandbox \"d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"74380fb47d9b2c8c92372747b6a8f2eb8fb88eba342f2c7eff2242f1e9b799e5\""
Jun 25 18:45:32.986263 containerd[1452]: time="2024-06-25T18:45:32.986185223Z" level=info msg="StartContainer for \"74380fb47d9b2c8c92372747b6a8f2eb8fb88eba342f2c7eff2242f1e9b799e5\""
Jun 25 18:45:33.026130 systemd[1]: Started cri-containerd-74380fb47d9b2c8c92372747b6a8f2eb8fb88eba342f2c7eff2242f1e9b799e5.scope - libcontainer container 74380fb47d9b2c8c92372747b6a8f2eb8fb88eba342f2c7eff2242f1e9b799e5.
Jun 25 18:45:33.119982 containerd[1452]: time="2024-06-25T18:45:33.119542009Z" level=info msg="StopPodSandbox for \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\""
Jun 25 18:45:33.120396 containerd[1452]: time="2024-06-25T18:45:33.120370416Z" level=info msg="StartContainer for \"74380fb47d9b2c8c92372747b6a8f2eb8fb88eba342f2c7eff2242f1e9b799e5\" returns successfully"
Jun 25 18:45:33.138422 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:53406.service - OpenSSH per-connection server daemon (10.0.0.1:53406).
Jun 25 18:45:33.183203 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 53406 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:45:33.185145 sshd[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:45:33.190398 systemd-logind[1433]: New session 14 of user core.
Jun 25 18:45:33.203203 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.292 [INFO][4467] k8s.go 608: Cleaning up netns ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e"
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.292 [INFO][4467] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" iface="eth0" netns="/var/run/netns/cni-91eea4b2-c096-8067-3f59-cbaa93b7900a"
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.293 [INFO][4467] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" iface="eth0" netns="/var/run/netns/cni-91eea4b2-c096-8067-3f59-cbaa93b7900a"
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.293 [INFO][4467] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" iface="eth0" netns="/var/run/netns/cni-91eea4b2-c096-8067-3f59-cbaa93b7900a"
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.293 [INFO][4467] k8s.go 615: Releasing IP address(es) ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e"
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.293 [INFO][4467] utils.go 188: Calico CNI releasing IP address ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e"
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.337 [INFO][4485] ipam_plugin.go 411: Releasing address using handleID ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" HandleID="k8s-pod-network.d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Workload="localhost-k8s-csi--node--driver--fnjft-eth0"
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.338 [INFO][4485] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.338 [INFO][4485] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.344 [WARNING][4485] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" HandleID="k8s-pod-network.d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Workload="localhost-k8s-csi--node--driver--fnjft-eth0"
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.344 [INFO][4485] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" HandleID="k8s-pod-network.d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Workload="localhost-k8s-csi--node--driver--fnjft-eth0"
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.346 [INFO][4485] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 18:45:33.352961 containerd[1452]: 2024-06-25 18:45:33.350 [INFO][4467] k8s.go 621: Teardown processing complete. ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e"
Jun 25 18:45:33.353437 containerd[1452]: time="2024-06-25T18:45:33.353134706Z" level=info msg="TearDown network for sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\" successfully"
Jun 25 18:45:33.353437 containerd[1452]: time="2024-06-25T18:45:33.353169636Z" level=info msg="StopPodSandbox for \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\" returns successfully"
Jun 25 18:45:33.354019 containerd[1452]: time="2024-06-25T18:45:33.353992682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fnjft,Uid:3ec7b6d3-70f6-438a-9640-5d7339271cb9,Namespace:calico-system,Attempt:1,}"
Jun 25 18:45:33.363167 kubelet[2552]: E0625 18:45:33.363117 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:45:33.364569 kubelet[2552]: E0625 18:45:33.364540 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:45:33.447148 kubelet[2552]: I0625 18:45:33.444628 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6478596484-t4777" podStartSLOduration=29.972275807 podStartE2EDuration="33.444606708s" podCreationTimestamp="2024-06-25 18:45:00 +0000 UTC" firstStartedPulling="2024-06-25 18:45:29.462798411 +0000 UTC m=+50.439024802" lastFinishedPulling="2024-06-25 18:45:32.935129312 +0000 UTC m=+53.911355703" observedRunningTime="2024-06-25 18:45:33.444297853 +0000 UTC m=+54.420524244" watchObservedRunningTime="2024-06-25 18:45:33.444606708 +0000 UTC m=+54.420833109"
Jun 25 18:45:33.461475 sshd[4450]: pam_unix(sshd:session): session closed for user core
Jun 25 18:45:33.471323 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:53406.service: Deactivated successfully.
Jun 25 18:45:33.473635 systemd[1]: session-14.scope: Deactivated successfully.
Jun 25 18:45:33.486431 systemd-logind[1433]: Session 14 logged out. Waiting for processes to exit.
Jun 25 18:45:33.491939 systemd-logind[1433]: Removed session 14.
Jun 25 18:45:33.618224 systemd-networkd[1387]: calibfa9e635a12: Link UP
Jun 25 18:45:33.619195 systemd-networkd[1387]: calibfa9e635a12: Gained carrier
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.532 [INFO][4520] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fnjft-eth0 csi-node-driver- calico-system 3ec7b6d3-70f6-438a-9640-5d7339271cb9 881 0 2024-06-25 18:45:00 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-fnjft eth0 default [] [] [kns.calico-system ksa.calico-system.default] calibfa9e635a12 [] []}} ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Namespace="calico-system" Pod="csi-node-driver-fnjft" WorkloadEndpoint="localhost-k8s-csi--node--driver--fnjft-"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.532 [INFO][4520] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Namespace="calico-system" Pod="csi-node-driver-fnjft" WorkloadEndpoint="localhost-k8s-csi--node--driver--fnjft-eth0"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.569 [INFO][4531] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" HandleID="k8s-pod-network.9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Workload="localhost-k8s-csi--node--driver--fnjft-eth0"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.578 [INFO][4531] ipam_plugin.go 264: Auto assigning IP ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" HandleID="k8s-pod-network.9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Workload="localhost-k8s-csi--node--driver--fnjft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027ce30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fnjft", "timestamp":"2024-06-25 18:45:33.569233601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.578 [INFO][4531] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.578 [INFO][4531] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.578 [INFO][4531] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.580 [INFO][4531] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" host="localhost"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.585 [INFO][4531] ipam.go 372: Looking up existing affinities for host host="localhost"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.590 [INFO][4531] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.592 [INFO][4531] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.595 [INFO][4531] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.595 [INFO][4531] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" host="localhost"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.596 [INFO][4531] ipam.go 1685: Creating new handle: k8s-pod-network.9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.599 [INFO][4531] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" host="localhost"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.612 [INFO][4531] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" host="localhost"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.612 [INFO][4531] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" host="localhost"
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.612 [INFO][4531] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 18:45:33.791484 containerd[1452]: 2024-06-25 18:45:33.612 [INFO][4531] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" HandleID="k8s-pod-network.9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Workload="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:33.793368 containerd[1452]: 2024-06-25 18:45:33.616 [INFO][4520] k8s.go 386: Populated endpoint ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Namespace="calico-system" Pod="csi-node-driver-fnjft" WorkloadEndpoint="localhost-k8s-csi--node--driver--fnjft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fnjft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ec7b6d3-70f6-438a-9640-5d7339271cb9", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fnjft", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calibfa9e635a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:33.793368 containerd[1452]: 2024-06-25 18:45:33.616 [INFO][4520] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Namespace="calico-system" Pod="csi-node-driver-fnjft" WorkloadEndpoint="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:33.793368 containerd[1452]: 2024-06-25 18:45:33.616 [INFO][4520] dataplane_linux.go 68: Setting the host side veth name to calibfa9e635a12 ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Namespace="calico-system" Pod="csi-node-driver-fnjft" WorkloadEndpoint="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:33.793368 containerd[1452]: 2024-06-25 18:45:33.617 [INFO][4520] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Namespace="calico-system" Pod="csi-node-driver-fnjft" WorkloadEndpoint="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:33.793368 containerd[1452]: 2024-06-25 18:45:33.619 [INFO][4520] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Namespace="calico-system" Pod="csi-node-driver-fnjft" WorkloadEndpoint="localhost-k8s-csi--node--driver--fnjft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fnjft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ec7b6d3-70f6-438a-9640-5d7339271cb9", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4", Pod:"csi-node-driver-fnjft", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibfa9e635a12", MAC:"6e:0c:65:99:e2:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:33.793368 containerd[1452]: 2024-06-25 18:45:33.786 [INFO][4520] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4" Namespace="calico-system" Pod="csi-node-driver-fnjft" WorkloadEndpoint="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:33.818949 containerd[1452]: time="2024-06-25T18:45:33.818361313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:33.818949 containerd[1452]: time="2024-06-25T18:45:33.818476726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:33.818949 containerd[1452]: time="2024-06-25T18:45:33.818509323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:33.818949 containerd[1452]: time="2024-06-25T18:45:33.818532840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:33.845176 systemd[1]: Started cri-containerd-9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4.scope - libcontainer container 9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4. Jun 25 18:45:33.863955 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 18:45:33.877538 containerd[1452]: time="2024-06-25T18:45:33.877497463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fnjft,Uid:3ec7b6d3-70f6-438a-9640-5d7339271cb9,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4\"" Jun 25 18:45:33.880157 containerd[1452]: time="2024-06-25T18:45:33.880124591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:45:33.945910 systemd[1]: run-netns-cni\x2d91eea4b2\x2dc096\x2d8067\x2d3f59\x2dcbaa93b7900a.mount: Deactivated successfully. 
Jun 25 18:45:34.367352 kubelet[2552]: E0625 18:45:34.367311 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 18:45:35.010351 systemd-networkd[1387]: calibfa9e635a12: Gained IPv6LL Jun 25 18:45:37.421993 containerd[1452]: time="2024-06-25T18:45:37.421918786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:37.422853 containerd[1452]: time="2024-06-25T18:45:37.422786468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:45:37.424819 containerd[1452]: time="2024-06-25T18:45:37.424773239Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:37.427593 containerd[1452]: time="2024-06-25T18:45:37.427533793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:37.428400 containerd[1452]: time="2024-06-25T18:45:37.428378889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 3.54821574s" Jun 25 18:45:37.428455 containerd[1452]: time="2024-06-25T18:45:37.428405954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:45:37.434359 containerd[1452]: 
time="2024-06-25T18:45:37.434292720Z" level=info msg="CreateContainer within sandbox \"9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:45:37.469460 containerd[1452]: time="2024-06-25T18:45:37.469399822Z" level=info msg="CreateContainer within sandbox \"9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"65e8f7953a3fd3e7ed59be2d3c09d996a8d50e129d12f79c3729b86a8967d398\"" Jun 25 18:45:37.470465 containerd[1452]: time="2024-06-25T18:45:37.469984462Z" level=info msg="StartContainer for \"65e8f7953a3fd3e7ed59be2d3c09d996a8d50e129d12f79c3729b86a8967d398\"" Jun 25 18:45:37.506845 systemd[1]: Started cri-containerd-65e8f7953a3fd3e7ed59be2d3c09d996a8d50e129d12f79c3729b86a8967d398.scope - libcontainer container 65e8f7953a3fd3e7ed59be2d3c09d996a8d50e129d12f79c3729b86a8967d398. Jun 25 18:45:37.543985 containerd[1452]: time="2024-06-25T18:45:37.543927435Z" level=info msg="StartContainer for \"65e8f7953a3fd3e7ed59be2d3c09d996a8d50e129d12f79c3729b86a8967d398\" returns successfully" Jun 25 18:45:37.545745 containerd[1452]: time="2024-06-25T18:45:37.545698010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:45:38.476367 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:58002.service - OpenSSH per-connection server daemon (10.0.0.1:58002). Jun 25 18:45:38.615700 sshd[4672]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:38.617610 sshd[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:38.622226 systemd-logind[1433]: New session 15 of user core. Jun 25 18:45:38.632078 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 18:45:38.799653 sshd[4672]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:38.804313 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:58002.service: Deactivated successfully. Jun 25 18:45:38.807498 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:45:38.808341 systemd-logind[1433]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:45:38.809656 systemd-logind[1433]: Removed session 15. Jun 25 18:45:39.108823 containerd[1452]: time="2024-06-25T18:45:39.108699387Z" level=info msg="StopPodSandbox for \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\"" Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.146 [WARNING][4702] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0591d1d9-4bc9-42e9-9e91-b48fd33d009e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f", Pod:"coredns-7db6d8ff4d-p6l7c", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bd1dd79790", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.146 [INFO][4702] k8s.go 608: Cleaning up netns ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.146 [INFO][4702] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" iface="eth0" netns="" Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.146 [INFO][4702] k8s.go 615: Releasing IP address(es) ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.146 [INFO][4702] utils.go 188: Calico CNI releasing IP address ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.171 [INFO][4712] ipam_plugin.go 411: Releasing address using handleID ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" HandleID="k8s-pod-network.ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.171 [INFO][4712] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.171 [INFO][4712] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.176 [WARNING][4712] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" HandleID="k8s-pod-network.ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.176 [INFO][4712] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" HandleID="k8s-pod-network.ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.178 [INFO][4712] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:39.184065 containerd[1452]: 2024-06-25 18:45:39.181 [INFO][4702] k8s.go 621: Teardown processing complete. ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:39.184510 containerd[1452]: time="2024-06-25T18:45:39.184126766Z" level=info msg="TearDown network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\" successfully" Jun 25 18:45:39.184510 containerd[1452]: time="2024-06-25T18:45:39.184158874Z" level=info msg="StopPodSandbox for \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\" returns successfully" Jun 25 18:45:39.191784 containerd[1452]: time="2024-06-25T18:45:39.191745607Z" level=info msg="RemovePodSandbox for \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\"" Jun 25 18:45:39.194444 containerd[1452]: time="2024-06-25T18:45:39.194403168Z" level=info msg="Forcibly stopping sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\"" Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.235 [WARNING][4735] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0591d1d9-4bc9-42e9-9e91-b48fd33d009e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5ddc77ee094c7f98c0cb9182151363e54f15db5925b7273e9d791cc196a879f", Pod:"coredns-7db6d8ff4d-p6l7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bd1dd79790", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.235 [INFO][4735] k8s.go 608: Cleaning up netns 
ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.235 [INFO][4735] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" iface="eth0" netns="" Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.235 [INFO][4735] k8s.go 615: Releasing IP address(es) ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.235 [INFO][4735] utils.go 188: Calico CNI releasing IP address ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.257 [INFO][4742] ipam_plugin.go 411: Releasing address using handleID ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" HandleID="k8s-pod-network.ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.257 [INFO][4742] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.257 [INFO][4742] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.264 [WARNING][4742] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" HandleID="k8s-pod-network.ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.264 [INFO][4742] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" HandleID="k8s-pod-network.ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Workload="localhost-k8s-coredns--7db6d8ff4d--p6l7c-eth0" Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.266 [INFO][4742] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:39.272159 containerd[1452]: 2024-06-25 18:45:39.269 [INFO][4735] k8s.go 621: Teardown processing complete. ContainerID="ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075" Jun 25 18:45:39.272610 containerd[1452]: time="2024-06-25T18:45:39.272213198Z" level=info msg="TearDown network for sandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\" successfully" Jun 25 18:45:39.290012 containerd[1452]: time="2024-06-25T18:45:39.289930685Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:45:39.290189 containerd[1452]: time="2024-06-25T18:45:39.290036673Z" level=info msg="RemovePodSandbox \"ee6e043f4096b02de79578afd7ae0557ea591d636019021ad2149d857def0075\" returns successfully" Jun 25 18:45:39.290667 containerd[1452]: time="2024-06-25T18:45:39.290619461Z" level=info msg="StopPodSandbox for \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\"" Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.339 [WARNING][4765] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d", Pod:"coredns-7db6d8ff4d-r6cll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali458580e1041", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.339 [INFO][4765] k8s.go 608: Cleaning up netns ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.340 [INFO][4765] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" iface="eth0" netns="" Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.340 [INFO][4765] k8s.go 615: Releasing IP address(es) ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.340 [INFO][4765] utils.go 188: Calico CNI releasing IP address ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.365 [INFO][4773] ipam_plugin.go 411: Releasing address using handleID ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" HandleID="k8s-pod-network.1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0" Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.365 [INFO][4773] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.367 [INFO][4773] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.375 [WARNING][4773] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" HandleID="k8s-pod-network.1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0" Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.375 [INFO][4773] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" HandleID="k8s-pod-network.1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0" Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.377 [INFO][4773] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:39.384719 containerd[1452]: 2024-06-25 18:45:39.380 [INFO][4765] k8s.go 621: Teardown processing complete. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:39.384719 containerd[1452]: time="2024-06-25T18:45:39.384674817Z" level=info msg="TearDown network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\" successfully" Jun 25 18:45:39.384719 containerd[1452]: time="2024-06-25T18:45:39.384700623Z" level=info msg="StopPodSandbox for \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\" returns successfully" Jun 25 18:45:39.386072 containerd[1452]: time="2024-06-25T18:45:39.385287808Z" level=info msg="RemovePodSandbox for \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\"" Jun 25 18:45:39.386072 containerd[1452]: time="2024-06-25T18:45:39.385326847Z" level=info msg="Forcibly stopping sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\"" Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.424 [WARNING][4798] k8s.go 572: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5885ae5f-9db4-44de-9ed1-fac4dd7f4b3f", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58d275ffccac35711ebcd910de9dd5ecc496bc69246ba9abaf8075dd7d0ce58d", Pod:"coredns-7db6d8ff4d-r6cll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali458580e1041", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.425 
[INFO][4798] k8s.go 608: Cleaning up netns ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.425 [INFO][4798] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" iface="eth0" netns="" Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.425 [INFO][4798] k8s.go 615: Releasing IP address(es) ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.425 [INFO][4798] utils.go 188: Calico CNI releasing IP address ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.453 [INFO][4806] ipam_plugin.go 411: Releasing address using handleID ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" HandleID="k8s-pod-network.1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0" Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.453 [INFO][4806] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.454 [INFO][4806] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.460 [WARNING][4806] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" HandleID="k8s-pod-network.1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0" Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.460 [INFO][4806] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" HandleID="k8s-pod-network.1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Workload="localhost-k8s-coredns--7db6d8ff4d--r6cll-eth0" Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.462 [INFO][4806] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:39.468188 containerd[1452]: 2024-06-25 18:45:39.465 [INFO][4798] k8s.go 621: Teardown processing complete. ContainerID="1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf" Jun 25 18:45:39.468661 containerd[1452]: time="2024-06-25T18:45:39.468209647Z" level=info msg="TearDown network for sandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\" successfully" Jun 25 18:45:39.558295 containerd[1452]: time="2024-06-25T18:45:39.558070468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:45:39.558295 containerd[1452]: time="2024-06-25T18:45:39.558183359Z" level=info msg="RemovePodSandbox \"1ff02e27e864c09bd9e1c110c2889f7abecfaa7972ef7980580f7f8ceecbb6cf\" returns successfully" Jun 25 18:45:39.558755 containerd[1452]: time="2024-06-25T18:45:39.558733909Z" level=info msg="StopPodSandbox for \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\"" Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.603 [WARNING][4833] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0", GenerateName:"calico-kube-controllers-6478596484-", Namespace:"calico-system", SelfLink:"", UID:"c6c8846e-3e44-489e-802e-c566bf42ad50", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6478596484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51", Pod:"calico-kube-controllers-6478596484-t4777", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic797a79782e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.603 [INFO][4833] k8s.go 608: Cleaning up netns ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.603 [INFO][4833] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" iface="eth0" netns="" Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.603 [INFO][4833] k8s.go 615: Releasing IP address(es) ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.603 [INFO][4833] utils.go 188: Calico CNI releasing IP address ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.630 [INFO][4842] ipam_plugin.go 411: Releasing address using handleID ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" HandleID="k8s-pod-network.e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.630 [INFO][4842] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.630 [INFO][4842] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.637 [WARNING][4842] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" HandleID="k8s-pod-network.e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.637 [INFO][4842] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" HandleID="k8s-pod-network.e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.639 [INFO][4842] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:39.645617 containerd[1452]: 2024-06-25 18:45:39.642 [INFO][4833] k8s.go 621: Teardown processing complete. ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:39.645617 containerd[1452]: time="2024-06-25T18:45:39.645363417Z" level=info msg="TearDown network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\" successfully" Jun 25 18:45:39.645617 containerd[1452]: time="2024-06-25T18:45:39.645394262Z" level=info msg="StopPodSandbox for \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\" returns successfully" Jun 25 18:45:39.646273 containerd[1452]: time="2024-06-25T18:45:39.645936838Z" level=info msg="RemovePodSandbox for \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\"" Jun 25 18:45:39.646273 containerd[1452]: time="2024-06-25T18:45:39.645977401Z" level=info msg="Forcibly stopping sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\"" Jun 25 18:45:39.693536 containerd[1452]: time="2024-06-25T18:45:39.693452665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 
18:45:39.695399 containerd[1452]: time="2024-06-25T18:45:39.695179760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:45:39.696839 containerd[1452]: time="2024-06-25T18:45:39.696803379Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:39.699493 containerd[1452]: time="2024-06-25T18:45:39.699459918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:39.701082 containerd[1452]: time="2024-06-25T18:45:39.701051952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.155314855s" Jun 25 18:45:39.701167 containerd[1452]: time="2024-06-25T18:45:39.701149405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:45:39.703368 containerd[1452]: time="2024-06-25T18:45:39.703347588Z" level=info msg="CreateContainer within sandbox \"9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.689 [WARNING][4865] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0", GenerateName:"calico-kube-controllers-6478596484-", Namespace:"calico-system", SelfLink:"", UID:"c6c8846e-3e44-489e-802e-c566bf42ad50", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6478596484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d98ae36448c9695c2f25edebed9bec6dec9fe7a174443824de5a0d65d6a7ac51", Pod:"calico-kube-controllers-6478596484-t4777", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic797a79782e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.689 [INFO][4865] k8s.go 608: Cleaning up netns ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.689 [INFO][4865] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" iface="eth0" netns="" Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.689 [INFO][4865] k8s.go 615: Releasing IP address(es) ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.689 [INFO][4865] utils.go 188: Calico CNI releasing IP address ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.715 [INFO][4873] ipam_plugin.go 411: Releasing address using handleID ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" HandleID="k8s-pod-network.e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.716 [INFO][4873] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.716 [INFO][4873] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.721 [WARNING][4873] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" HandleID="k8s-pod-network.e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.721 [INFO][4873] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" HandleID="k8s-pod-network.e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Workload="localhost-k8s-calico--kube--controllers--6478596484--t4777-eth0" Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.723 [INFO][4873] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:39.727671 containerd[1452]: 2024-06-25 18:45:39.725 [INFO][4865] k8s.go 621: Teardown processing complete. ContainerID="e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345" Jun 25 18:45:39.728109 containerd[1452]: time="2024-06-25T18:45:39.727716755Z" level=info msg="TearDown network for sandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\" successfully" Jun 25 18:45:39.733757 containerd[1452]: time="2024-06-25T18:45:39.732888460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:45:39.733757 containerd[1452]: time="2024-06-25T18:45:39.732950811Z" level=info msg="RemovePodSandbox \"e4b43a09e7bac3489a2eb9698aa000238753d31228eff9e9a977bd61b3777345\" returns successfully" Jun 25 18:45:39.733757 containerd[1452]: time="2024-06-25T18:45:39.733395913Z" level=info msg="StopPodSandbox for \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\"" Jun 25 18:45:39.739758 containerd[1452]: time="2024-06-25T18:45:39.739714140Z" level=info msg="CreateContainer within sandbox \"9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e97b7e2eafd8d405d77fe4dab85919c872274c99fcb1eb38408d4bfe2a79418c\"" Jun 25 18:45:39.741938 containerd[1452]: time="2024-06-25T18:45:39.741321741Z" level=info msg="StartContainer for \"e97b7e2eafd8d405d77fe4dab85919c872274c99fcb1eb38408d4bfe2a79418c\"" Jun 25 18:45:39.786126 systemd[1]: Started cri-containerd-e97b7e2eafd8d405d77fe4dab85919c872274c99fcb1eb38408d4bfe2a79418c.scope - libcontainer container e97b7e2eafd8d405d77fe4dab85919c872274c99fcb1eb38408d4bfe2a79418c. Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.774 [WARNING][4895] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fnjft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ec7b6d3-70f6-438a-9640-5d7339271cb9", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4", Pod:"csi-node-driver-fnjft", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibfa9e635a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.775 [INFO][4895] k8s.go 608: Cleaning up netns ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.775 [INFO][4895] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" iface="eth0" netns="" Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.775 [INFO][4895] k8s.go 615: Releasing IP address(es) ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.775 [INFO][4895] utils.go 188: Calico CNI releasing IP address ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.799 [INFO][4920] ipam_plugin.go 411: Releasing address using handleID ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" HandleID="k8s-pod-network.d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Workload="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.800 [INFO][4920] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.800 [INFO][4920] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.806 [WARNING][4920] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" HandleID="k8s-pod-network.d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Workload="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.806 [INFO][4920] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" HandleID="k8s-pod-network.d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Workload="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.808 [INFO][4920] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:39.815276 containerd[1452]: 2024-06-25 18:45:39.811 [INFO][4895] k8s.go 621: Teardown processing complete. ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:39.816006 containerd[1452]: time="2024-06-25T18:45:39.815319515Z" level=info msg="TearDown network for sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\" successfully" Jun 25 18:45:39.816006 containerd[1452]: time="2024-06-25T18:45:39.815342527Z" level=info msg="StopPodSandbox for \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\" returns successfully" Jun 25 18:45:39.816158 containerd[1452]: time="2024-06-25T18:45:39.816119489Z" level=info msg="RemovePodSandbox for \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\"" Jun 25 18:45:39.816197 containerd[1452]: time="2024-06-25T18:45:39.816161073Z" level=info msg="Forcibly stopping sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\"" Jun 25 18:45:39.828344 containerd[1452]: time="2024-06-25T18:45:39.828269166Z" level=info msg="StartContainer for \"e97b7e2eafd8d405d77fe4dab85919c872274c99fcb1eb38408d4bfe2a79418c\" returns successfully" Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.870 [WARNING][4961] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fnjft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ec7b6d3-70f6-438a-9640-5d7339271cb9", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ff4267b4a0b28741e13d6bc711a3d878690164f3056bd3c990d19b05ee921d4", Pod:"csi-node-driver-fnjft", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibfa9e635a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.871 [INFO][4961] k8s.go 608: Cleaning up netns ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.871 [INFO][4961] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" iface="eth0" netns="" Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.871 [INFO][4961] k8s.go 615: Releasing IP address(es) ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.871 [INFO][4961] utils.go 188: Calico CNI releasing IP address ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.907 [INFO][4973] ipam_plugin.go 411: Releasing address using handleID ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" HandleID="k8s-pod-network.d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Workload="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.907 [INFO][4973] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.907 [INFO][4973] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.915 [WARNING][4973] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" HandleID="k8s-pod-network.d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Workload="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.915 [INFO][4973] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" HandleID="k8s-pod-network.d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Workload="localhost-k8s-csi--node--driver--fnjft-eth0" Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.917 [INFO][4973] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:39.923319 containerd[1452]: 2024-06-25 18:45:39.920 [INFO][4961] k8s.go 621: Teardown processing complete. ContainerID="d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e" Jun 25 18:45:39.923319 containerd[1452]: time="2024-06-25T18:45:39.923277379Z" level=info msg="TearDown network for sandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\" successfully" Jun 25 18:45:40.043619 containerd[1452]: time="2024-06-25T18:45:40.043554373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:40.043788 containerd[1452]: time="2024-06-25T18:45:40.043649713Z" level=info msg="RemovePodSandbox \"d999dc19572b0ea907c0e57149731b26943aa8f31fa65f037997a8ef655c400e\" returns successfully" Jun 25 18:45:40.228115 kubelet[2552]: I0625 18:45:40.227992 2552 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:45:40.228115 kubelet[2552]: I0625 18:45:40.228033 2552 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:45:43.811390 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:58016.service - OpenSSH per-connection server daemon (10.0.0.1:58016). Jun 25 18:45:43.853265 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 58016 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:43.855214 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:43.859638 systemd-logind[1433]: New session 16 of user core. Jun 25 18:45:43.867083 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 18:45:44.008003 sshd[4982]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:44.012503 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:58016.service: Deactivated successfully. Jun 25 18:45:44.015225 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:45:44.016108 systemd-logind[1433]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:45:44.018893 systemd-logind[1433]: Removed session 16. Jun 25 18:45:49.024404 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:44846.service - OpenSSH per-connection server daemon (10.0.0.1:44846). Jun 25 18:45:49.061435 sshd[5029]: Accepted publickey for core from 10.0.0.1 port 44846 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:49.063108 sshd[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:49.067808 systemd-logind[1433]: New session 17 of user core. Jun 25 18:45:49.083105 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:45:49.209452 sshd[5029]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:49.214341 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:44846.service: Deactivated successfully. Jun 25 18:45:49.217201 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:45:49.218024 systemd-logind[1433]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:45:49.219015 systemd-logind[1433]: Removed session 17. Jun 25 18:45:54.227900 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:44856.service - OpenSSH per-connection server daemon (10.0.0.1:44856). Jun 25 18:45:54.265294 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 44856 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:54.267114 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:54.271358 systemd-logind[1433]: New session 18 of user core. 
Jun 25 18:45:54.279245 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:45:54.420470 sshd[5045]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:54.430232 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:44856.service: Deactivated successfully. Jun 25 18:45:54.432475 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:45:54.434247 systemd-logind[1433]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:45:54.439286 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:44866.service - OpenSSH per-connection server daemon (10.0.0.1:44866). Jun 25 18:45:54.440455 systemd-logind[1433]: Removed session 18. Jun 25 18:45:54.477995 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 44866 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:54.479614 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:54.485118 systemd-logind[1433]: New session 19 of user core. Jun 25 18:45:54.494167 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:45:54.793188 sshd[5060]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:54.807483 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:44866.service: Deactivated successfully. Jun 25 18:45:54.810074 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:45:54.811668 systemd-logind[1433]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:45:54.813470 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:44868.service - OpenSSH per-connection server daemon (10.0.0.1:44868). Jun 25 18:45:54.814557 systemd-logind[1433]: Removed session 19. 
Jun 25 18:45:54.859332 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 44868 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:54.861201 sshd[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:54.865715 systemd-logind[1433]: New session 20 of user core. Jun 25 18:45:54.875052 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:45:56.760601 sshd[5072]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:56.773200 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:44868.service: Deactivated successfully. Jun 25 18:45:56.775607 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:45:56.779384 systemd-logind[1433]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:45:56.788563 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:45326.service - OpenSSH per-connection server daemon (10.0.0.1:45326). Jun 25 18:45:56.791676 systemd-logind[1433]: Removed session 20. Jun 25 18:45:56.826208 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 45326 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:56.827987 sshd[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:56.833478 systemd-logind[1433]: New session 21 of user core. Jun 25 18:45:56.842147 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:45:57.093838 sshd[5107]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:57.105448 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:45326.service: Deactivated successfully. Jun 25 18:45:57.107508 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:45:57.110442 systemd-logind[1433]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:45:57.120608 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:45328.service - OpenSSH per-connection server daemon (10.0.0.1:45328). 
Jun 25 18:45:57.121615 systemd-logind[1433]: Removed session 21. Jun 25 18:45:57.153283 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 45328 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:45:57.155165 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:57.159775 systemd-logind[1433]: New session 22 of user core. Jun 25 18:45:57.164011 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:45:57.267654 sshd[5120]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:57.271396 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:45328.service: Deactivated successfully. Jun 25 18:45:57.274201 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:45:57.274945 systemd-logind[1433]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:45:57.275828 systemd-logind[1433]: Removed session 22. Jun 25 18:46:02.291898 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:45336.service - OpenSSH per-connection server daemon (10.0.0.1:45336). Jun 25 18:46:02.332139 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 45336 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM Jun 25 18:46:02.333927 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:02.338122 systemd-logind[1433]: New session 23 of user core. Jun 25 18:46:02.352094 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:46:02.479715 sshd[5158]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:02.484040 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:45336.service: Deactivated successfully. Jun 25 18:46:02.486427 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:46:02.487247 systemd-logind[1433]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:46:02.488136 systemd-logind[1433]: Removed session 23. 
Jun 25 18:46:06.117392 kubelet[2552]: E0625 18:46:06.117290 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:46:07.505215 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:39652.service - OpenSSH per-connection server daemon (10.0.0.1:39652).
Jun 25 18:46:07.540473 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 39652 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:46:07.542421 sshd[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:46:07.547949 systemd-logind[1433]: New session 24 of user core.
Jun 25 18:46:07.553020 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 25 18:46:07.668032 sshd[5183]: pam_unix(sshd:session): session closed for user core
Jun 25 18:46:07.673673 systemd-logind[1433]: Session 24 logged out. Waiting for processes to exit.
Jun 25 18:46:07.675172 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:39652.service: Deactivated successfully.
Jun 25 18:46:07.678627 systemd[1]: session-24.scope: Deactivated successfully.
Jun 25 18:46:07.680096 systemd-logind[1433]: Removed session 24.
Jun 25 18:46:08.224413 kubelet[2552]: E0625 18:46:08.224365 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:46:08.236161 kubelet[2552]: I0625 18:46:08.236097 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fnjft" podStartSLOduration=62.413986052 podStartE2EDuration="1m8.236078143s" podCreationTimestamp="2024-06-25 18:45:00 +0000 UTC" firstStartedPulling="2024-06-25 18:45:33.879691113 +0000 UTC m=+54.855917504" lastFinishedPulling="2024-06-25 18:45:39.701783214 +0000 UTC m=+60.678009595" observedRunningTime="2024-06-25 18:45:40.401098486 +0000 UTC m=+61.377324887" watchObservedRunningTime="2024-06-25 18:46:08.236078143 +0000 UTC m=+89.212304534"
Jun 25 18:46:10.120429 kubelet[2552]: E0625 18:46:10.120063 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:46:10.193225 kubelet[2552]: I0625 18:46:10.192224 2552 topology_manager.go:215] "Topology Admit Handler" podUID="ea25c3b9-06a3-4edc-9921-31ab202856c6" podNamespace="calico-apiserver" podName="calico-apiserver-9b5798c46-mjjr8"
Jun 25 18:46:10.203227 systemd[1]: Created slice kubepods-besteffort-podea25c3b9_06a3_4edc_9921_31ab202856c6.slice - libcontainer container kubepods-besteffort-podea25c3b9_06a3_4edc_9921_31ab202856c6.slice.
Jun 25 18:46:10.314617 kubelet[2552]: I0625 18:46:10.314517 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkzq\" (UniqueName: \"kubernetes.io/projected/ea25c3b9-06a3-4edc-9921-31ab202856c6-kube-api-access-5tkzq\") pod \"calico-apiserver-9b5798c46-mjjr8\" (UID: \"ea25c3b9-06a3-4edc-9921-31ab202856c6\") " pod="calico-apiserver/calico-apiserver-9b5798c46-mjjr8"
Jun 25 18:46:10.314617 kubelet[2552]: I0625 18:46:10.314577 2552 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea25c3b9-06a3-4edc-9921-31ab202856c6-calico-apiserver-certs\") pod \"calico-apiserver-9b5798c46-mjjr8\" (UID: \"ea25c3b9-06a3-4edc-9921-31ab202856c6\") " pod="calico-apiserver/calico-apiserver-9b5798c46-mjjr8"
Jun 25 18:46:10.508411 containerd[1452]: time="2024-06-25T18:46:10.508357462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5798c46-mjjr8,Uid:ea25c3b9-06a3-4edc-9921-31ab202856c6,Namespace:calico-apiserver,Attempt:0,}"
Jun 25 18:46:10.689458 systemd-networkd[1387]: caliccb28e001c7: Link UP
Jun 25 18:46:10.690098 systemd-networkd[1387]: caliccb28e001c7: Gained carrier
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.626 [INFO][5229] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0 calico-apiserver-9b5798c46- calico-apiserver ea25c3b9-06a3-4edc-9921-31ab202856c6 1128 0 2024-06-25 18:46:10 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b5798c46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9b5798c46-mjjr8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliccb28e001c7 [] []}} ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Namespace="calico-apiserver" Pod="calico-apiserver-9b5798c46-mjjr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.627 [INFO][5229] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Namespace="calico-apiserver" Pod="calico-apiserver-9b5798c46-mjjr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.650 [INFO][5241] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" HandleID="k8s-pod-network.99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Workload="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.660 [INFO][5241] ipam_plugin.go 264: Auto assigning IP ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" HandleID="k8s-pod-network.99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Workload="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dc750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9b5798c46-mjjr8", "timestamp":"2024-06-25 18:46:10.650725795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.660 [INFO][5241] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.660 [INFO][5241] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.660 [INFO][5241] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.662 [INFO][5241] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" host="localhost"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.667 [INFO][5241] ipam.go 372: Looking up existing affinities for host host="localhost"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.671 [INFO][5241] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.672 [INFO][5241] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.675 [INFO][5241] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.675 [INFO][5241] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" host="localhost"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.677 [INFO][5241] ipam.go 1685: Creating new handle: k8s-pod-network.99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.679 [INFO][5241] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" host="localhost"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.684 [INFO][5241] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" host="localhost"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.684 [INFO][5241] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" host="localhost"
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.684 [INFO][5241] ipam_plugin.go 373: Released host-wide IPAM lock.
Jun 25 18:46:10.701744 containerd[1452]: 2024-06-25 18:46:10.684 [INFO][5241] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" HandleID="k8s-pod-network.99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Workload="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0"
Jun 25 18:46:10.702402 containerd[1452]: 2024-06-25 18:46:10.686 [INFO][5229] k8s.go 386: Populated endpoint ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Namespace="calico-apiserver" Pod="calico-apiserver-9b5798c46-mjjr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0", GenerateName:"calico-apiserver-9b5798c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea25c3b9-06a3-4edc-9921-31ab202856c6", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 46, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5798c46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9b5798c46-mjjr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccb28e001c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 18:46:10.702402 containerd[1452]: 2024-06-25 18:46:10.686 [INFO][5229] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Namespace="calico-apiserver" Pod="calico-apiserver-9b5798c46-mjjr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0"
Jun 25 18:46:10.702402 containerd[1452]: 2024-06-25 18:46:10.686 [INFO][5229] dataplane_linux.go 68: Setting the host side veth name to caliccb28e001c7 ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Namespace="calico-apiserver" Pod="calico-apiserver-9b5798c46-mjjr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0"
Jun 25 18:46:10.702402 containerd[1452]: 2024-06-25 18:46:10.689 [INFO][5229] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Namespace="calico-apiserver" Pod="calico-apiserver-9b5798c46-mjjr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0"
Jun 25 18:46:10.702402 containerd[1452]: 2024-06-25 18:46:10.690 [INFO][5229] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Namespace="calico-apiserver" Pod="calico-apiserver-9b5798c46-mjjr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0", GenerateName:"calico-apiserver-9b5798c46-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea25c3b9-06a3-4edc-9921-31ab202856c6", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 46, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5798c46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a", Pod:"calico-apiserver-9b5798c46-mjjr8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccb28e001c7", MAC:"66:32:64:36:47:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jun 25 18:46:10.702402 containerd[1452]: 2024-06-25 18:46:10.699 [INFO][5229] k8s.go 500: Wrote updated endpoint to datastore ContainerID="99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a" Namespace="calico-apiserver" Pod="calico-apiserver-9b5798c46-mjjr8" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5798c46--mjjr8-eth0"
Jun 25 18:46:10.725360 containerd[1452]: time="2024-06-25T18:46:10.725254864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 18:46:10.725821 containerd[1452]: time="2024-06-25T18:46:10.725769904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:46:10.725821 containerd[1452]: time="2024-06-25T18:46:10.725789972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 18:46:10.725821 containerd[1452]: time="2024-06-25T18:46:10.725799220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 18:46:10.748083 systemd[1]: Started cri-containerd-99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a.scope - libcontainer container 99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a.
Jun 25 18:46:10.764166 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jun 25 18:46:10.793248 containerd[1452]: time="2024-06-25T18:46:10.793198316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5798c46-mjjr8,Uid:ea25c3b9-06a3-4edc-9921-31ab202856c6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a\""
Jun 25 18:46:10.795260 containerd[1452]: time="2024-06-25T18:46:10.795214946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jun 25 18:46:12.450149 systemd-networkd[1387]: caliccb28e001c7: Gained IPv6LL
Jun 25 18:46:12.682925 systemd[1]: Started sshd@24-10.0.0.116:22-10.0.0.1:39662.service - OpenSSH per-connection server daemon (10.0.0.1:39662).
Jun 25 18:46:12.727118 sshd[5309]: Accepted publickey for core from 10.0.0.1 port 39662 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:46:12.729578 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:46:12.736987 systemd-logind[1433]: New session 25 of user core.
Jun 25 18:46:12.749020 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 25 18:46:12.870436 sshd[5309]: pam_unix(sshd:session): session closed for user core
Jun 25 18:46:12.874971 systemd[1]: sshd@24-10.0.0.116:22-10.0.0.1:39662.service: Deactivated successfully.
Jun 25 18:46:12.877385 systemd[1]: session-25.scope: Deactivated successfully.
Jun 25 18:46:12.878072 systemd-logind[1433]: Session 25 logged out. Waiting for processes to exit.
Jun 25 18:46:12.879000 systemd-logind[1433]: Removed session 25.
Jun 25 18:46:13.117941 kubelet[2552]: E0625 18:46:13.117816 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:46:13.573593 containerd[1452]: time="2024-06-25T18:46:13.573506797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:46:13.574536 containerd[1452]: time="2024-06-25T18:46:13.574487820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jun 25 18:46:13.576563 containerd[1452]: time="2024-06-25T18:46:13.576521475Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:46:13.579383 containerd[1452]: time="2024-06-25T18:46:13.579344267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 25 18:46:13.580062 containerd[1452]: time="2024-06-25T18:46:13.580039415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.784786707s"
Jun 25 18:46:13.580122 containerd[1452]: time="2024-06-25T18:46:13.580065504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jun 25 18:46:13.582185 containerd[1452]: time="2024-06-25T18:46:13.582155196Z" level=info msg="CreateContainer within sandbox \"99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jun 25 18:46:13.595359 containerd[1452]: time="2024-06-25T18:46:13.595308670Z" level=info msg="CreateContainer within sandbox \"99817685cd3516d67e1251b6aef5e587ead8f4bea06cebe66db409d69588207a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aa50443e876515a205ed5179fbbe56fb35e151f26376f332c0f4bfac958b7544\""
Jun 25 18:46:13.595825 containerd[1452]: time="2024-06-25T18:46:13.595802884Z" level=info msg="StartContainer for \"aa50443e876515a205ed5179fbbe56fb35e151f26376f332c0f4bfac958b7544\""
Jun 25 18:46:13.633110 systemd[1]: Started cri-containerd-aa50443e876515a205ed5179fbbe56fb35e151f26376f332c0f4bfac958b7544.scope - libcontainer container aa50443e876515a205ed5179fbbe56fb35e151f26376f332c0f4bfac958b7544.
Jun 25 18:46:13.776582 containerd[1452]: time="2024-06-25T18:46:13.776522854Z" level=info msg="StartContainer for \"aa50443e876515a205ed5179fbbe56fb35e151f26376f332c0f4bfac958b7544\" returns successfully"
Jun 25 18:46:14.551719 kubelet[2552]: I0625 18:46:14.551641 2552 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b5798c46-mjjr8" podStartSLOduration=1.765627792 podStartE2EDuration="4.551624909s" podCreationTimestamp="2024-06-25 18:46:10 +0000 UTC" firstStartedPulling="2024-06-25 18:46:10.794842126 +0000 UTC m=+91.771068517" lastFinishedPulling="2024-06-25 18:46:13.580839243 +0000 UTC m=+94.557065634" observedRunningTime="2024-06-25 18:46:14.524362849 +0000 UTC m=+95.500589240" watchObservedRunningTime="2024-06-25 18:46:14.551624909 +0000 UTC m=+95.527851300"
Jun 25 18:46:17.891100 systemd[1]: Started sshd@25-10.0.0.116:22-10.0.0.1:55654.service - OpenSSH per-connection server daemon (10.0.0.1:55654).
Jun 25 18:46:17.930575 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 55654 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:46:17.932464 sshd[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:46:17.937823 systemd-logind[1433]: New session 26 of user core.
Jun 25 18:46:17.951140 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 25 18:46:18.088319 sshd[5398]: pam_unix(sshd:session): session closed for user core
Jun 25 18:46:18.092361 systemd[1]: sshd@25-10.0.0.116:22-10.0.0.1:55654.service: Deactivated successfully.
Jun 25 18:46:18.094688 systemd[1]: session-26.scope: Deactivated successfully.
Jun 25 18:46:18.095448 systemd-logind[1433]: Session 26 logged out. Waiting for processes to exit.
Jun 25 18:46:18.096446 systemd-logind[1433]: Removed session 26.
Jun 25 18:46:21.117471 kubelet[2552]: E0625 18:46:21.117436 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 18:46:23.100980 systemd[1]: Started sshd@26-10.0.0.116:22-10.0.0.1:55658.service - OpenSSH per-connection server daemon (10.0.0.1:55658).
Jun 25 18:46:23.140204 sshd[5420]: Accepted publickey for core from 10.0.0.1 port 55658 ssh2: RSA SHA256:aOL7xLJVSGgo2ACgb9Q96KiqqB5PNY5rPU/3iN9wkOM
Jun 25 18:46:23.142002 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 18:46:23.146637 systemd-logind[1433]: New session 27 of user core.
Jun 25 18:46:23.161116 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 25 18:46:23.285288 sshd[5420]: pam_unix(sshd:session): session closed for user core
Jun 25 18:46:23.290074 systemd[1]: sshd@26-10.0.0.116:22-10.0.0.1:55658.service: Deactivated successfully.
Jun 25 18:46:23.292215 systemd[1]: session-27.scope: Deactivated successfully.
Jun 25 18:46:23.292946 systemd-logind[1433]: Session 27 logged out. Waiting for processes to exit.
Jun 25 18:46:23.293791 systemd-logind[1433]: Removed session 27.
Jun 25 18:46:24.117992 kubelet[2552]: E0625 18:46:24.117864 2552 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"