Oct 8 19:51:05.967435 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 8 19:51:05.967456 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:51:05.967468 kernel: BIOS-provided physical RAM map:
Oct 8 19:51:05.967474 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 8 19:51:05.967481 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 8 19:51:05.967487 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 8 19:51:05.967494 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 8 19:51:05.967501 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 8 19:51:05.967507 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 8 19:51:05.967516 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 8 19:51:05.967526 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 8 19:51:05.967532 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 8 19:51:05.967539 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 8 19:51:05.967545 kernel: NX (Execute Disable) protection: active
Oct 8 19:51:05.967553 kernel: APIC: Static calls initialized
Oct 8 19:51:05.967563 kernel: SMBIOS 2.8 present.
Oct 8 19:51:05.967584 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 8 19:51:05.967591 kernel: Hypervisor detected: KVM
Oct 8 19:51:05.967598 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 8 19:51:05.967605 kernel: kvm-clock: using sched offset of 2794900067 cycles
Oct 8 19:51:05.967612 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 8 19:51:05.967619 kernel: tsc: Detected 2794.748 MHz processor
Oct 8 19:51:05.967627 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 8 19:51:05.967634 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 8 19:51:05.967644 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 8 19:51:05.967651 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 8 19:51:05.967658 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 8 19:51:05.967665 kernel: Using GB pages for direct mapping
Oct 8 19:51:05.967672 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:51:05.967679 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 8 19:51:05.967686 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:51:05.967694 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:51:05.967701 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:51:05.967710 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 8 19:51:05.967717 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:51:05.967724 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:51:05.967732 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:51:05.967739 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:51:05.967746 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Oct 8 19:51:05.967753 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Oct 8 19:51:05.967766 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 8 19:51:05.967776 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Oct 8 19:51:05.967783 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Oct 8 19:51:05.967790 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Oct 8 19:51:05.967798 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Oct 8 19:51:05.967805 kernel: No NUMA configuration found
Oct 8 19:51:05.967812 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 8 19:51:05.967822 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Oct 8 19:51:05.967829 kernel: Zone ranges:
Oct 8 19:51:05.967836 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 8 19:51:05.967844 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 8 19:51:05.967851 kernel: Normal empty
Oct 8 19:51:05.967858 kernel: Movable zone start for each node
Oct 8 19:51:05.967866 kernel: Early memory node ranges
Oct 8 19:51:05.967873 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 8 19:51:05.967880 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 8 19:51:05.967887 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 8 19:51:05.967899 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 19:51:05.967907 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 8 19:51:05.967951 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 8 19:51:05.967959 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 8 19:51:05.967966 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 8 19:51:05.967981 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 8 19:51:05.967989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 8 19:51:05.967996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 8 19:51:05.968003 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 8 19:51:05.968014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 8 19:51:05.968021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 8 19:51:05.968029 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 8 19:51:05.968036 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 8 19:51:05.968043 kernel: TSC deadline timer available
Oct 8 19:51:05.968054 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 8 19:51:05.968063 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 8 19:51:05.968070 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 8 19:51:05.968077 kernel: kvm-guest: setup PV sched yield
Oct 8 19:51:05.968090 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 8 19:51:05.968097 kernel: Booting paravirtualized kernel on KVM
Oct 8 19:51:05.968105 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 8 19:51:05.968113 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 8 19:51:05.968120 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Oct 8 19:51:05.968127 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Oct 8 19:51:05.968134 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 8 19:51:05.968142 kernel: kvm-guest: PV spinlocks enabled
Oct 8 19:51:05.968149 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 8 19:51:05.968170 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:51:05.968179 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:51:05.968186 kernel: random: crng init done
Oct 8 19:51:05.968193 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:51:05.968201 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:51:05.968208 kernel: Fallback order for Node 0: 0
Oct 8 19:51:05.968215 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Oct 8 19:51:05.968223 kernel: Policy zone: DMA32
Oct 8 19:51:05.968233 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:51:05.968240 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 136904K reserved, 0K cma-reserved)
Oct 8 19:51:05.968248 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 19:51:05.968255 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 8 19:51:05.968263 kernel: ftrace: allocated 148 pages with 3 groups
Oct 8 19:51:05.968270 kernel: Dynamic Preempt: voluntary
Oct 8 19:51:05.968277 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:51:05.968286 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:51:05.968293 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 19:51:05.968303 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:51:05.968310 kernel: Rude variant of Tasks RCU enabled.
Oct 8 19:51:05.968318 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:51:05.968328 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:51:05.968335 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 19:51:05.968343 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 8 19:51:05.968350 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:51:05.968357 kernel: Console: colour VGA+ 80x25
Oct 8 19:51:05.968364 kernel: printk: console [ttyS0] enabled
Oct 8 19:51:05.968374 kernel: ACPI: Core revision 20230628
Oct 8 19:51:05.968381 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 8 19:51:05.968389 kernel: APIC: Switch to symmetric I/O mode setup
Oct 8 19:51:05.968396 kernel: x2apic enabled
Oct 8 19:51:05.968403 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 8 19:51:05.968411 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 8 19:51:05.968418 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 8 19:51:05.968426 kernel: kvm-guest: setup PV IPIs
Oct 8 19:51:05.968446 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 8 19:51:05.968453 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 8 19:51:05.968461 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 8 19:51:05.968469 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 8 19:51:05.968479 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 8 19:51:05.968486 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 8 19:51:05.968494 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 8 19:51:05.968502 kernel: Spectre V2 : Mitigation: Retpolines
Oct 8 19:51:05.968510 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 8 19:51:05.968520 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 8 19:51:05.968528 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 8 19:51:05.968535 kernel: RETBleed: Mitigation: untrained return thunk
Oct 8 19:51:05.968546 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 8 19:51:05.968554 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 8 19:51:05.968561 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 8 19:51:05.968570 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 8 19:51:05.968578 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 8 19:51:05.968588 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 8 19:51:05.968596 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 8 19:51:05.968603 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 8 19:51:05.968611 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 8 19:51:05.968619 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 8 19:51:05.968626 kernel: Freeing SMP alternatives memory: 32K
Oct 8 19:51:05.968634 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:51:05.968642 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 19:51:05.968650 kernel: landlock: Up and running.
Oct 8 19:51:05.968660 kernel: SELinux: Initializing.
Oct 8 19:51:05.968667 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:51:05.968675 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:51:05.968683 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 8 19:51:05.968690 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:51:05.968698 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:51:05.968708 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:51:05.968716 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 8 19:51:05.968724 kernel: ... version: 0
Oct 8 19:51:05.968734 kernel: ... bit width: 48
Oct 8 19:51:05.968742 kernel: ... generic registers: 6
Oct 8 19:51:05.968749 kernel: ... value mask: 0000ffffffffffff
Oct 8 19:51:05.968757 kernel: ... max period: 00007fffffffffff
Oct 8 19:51:05.968764 kernel: ... fixed-purpose events: 0
Oct 8 19:51:05.968772 kernel: ... event mask: 000000000000003f
Oct 8 19:51:05.968780 kernel: signal: max sigframe size: 1776
Oct 8 19:51:05.968787 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:51:05.968795 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:51:05.968805 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:51:05.968813 kernel: smpboot: x86: Booting SMP configuration:
Oct 8 19:51:05.968820 kernel: .... node #0, CPUs: #1 #2 #3
Oct 8 19:51:05.968828 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 19:51:05.968840 kernel: smpboot: Max logical packages: 1
Oct 8 19:51:05.968848 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 8 19:51:05.968856 kernel: devtmpfs: initialized
Oct 8 19:51:05.968863 kernel: x86/mm: Memory block size: 128MB
Oct 8 19:51:05.968871 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:51:05.968881 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 19:51:05.968890 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:51:05.968899 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:51:05.968908 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:51:05.968927 kernel: audit: type=2000 audit(1728417064.395:1): state=initialized audit_enabled=0 res=1
Oct 8 19:51:05.968934 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:51:05.968942 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 8 19:51:05.968959 kernel: cpuidle: using governor menu
Oct 8 19:51:05.968967 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:51:05.968977 kernel: dca service started, version 1.12.1
Oct 8 19:51:05.968985 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 8 19:51:05.968993 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 8 19:51:05.969000 kernel: PCI: Using configuration type 1 for base access
Oct 8 19:51:05.969008 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 8 19:51:05.969016 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:51:05.969024 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:51:05.969031 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:51:05.969039 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:51:05.969049 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:51:05.969057 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:51:05.969064 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:51:05.969072 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:51:05.969080 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:51:05.969088 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 19:51:05.969095 kernel: ACPI: Interpreter enabled
Oct 8 19:51:05.969103 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 8 19:51:05.969111 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 19:51:05.969121 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 19:51:05.969129 kernel: PCI: Using E820 reservations for host bridge windows
Oct 8 19:51:05.969137 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 8 19:51:05.969145 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:51:05.969392 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:51:05.969535 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 8 19:51:05.969658 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 8 19:51:05.969672 kernel: PCI host bridge to bus 0000:00
Oct 8 19:51:05.969811 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 8 19:51:05.969944 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 8 19:51:05.970185 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 8 19:51:05.970298 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 8 19:51:05.970409 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 8 19:51:05.970520 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 8 19:51:05.970636 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:51:05.970787 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 8 19:51:05.970942 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 8 19:51:05.971070 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Oct 8 19:51:05.971212 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Oct 8 19:51:05.971337 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Oct 8 19:51:05.971461 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 8 19:51:05.971608 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 19:51:05.971734 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Oct 8 19:51:05.971856 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Oct 8 19:51:05.971995 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 8 19:51:05.972135 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 8 19:51:05.972270 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Oct 8 19:51:05.972392 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Oct 8 19:51:05.972517 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 8 19:51:05.972651 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 8 19:51:05.972775 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Oct 8 19:51:05.972897 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Oct 8 19:51:05.973034 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 8 19:51:05.973156 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Oct 8 19:51:05.973334 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 8 19:51:05.973462 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 8 19:51:05.973598 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 8 19:51:05.973721 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Oct 8 19:51:05.973841 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Oct 8 19:51:05.974005 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 8 19:51:05.974128 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 8 19:51:05.974143 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 8 19:51:05.974152 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 8 19:51:05.974168 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 8 19:51:05.974177 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 8 19:51:05.974185 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 8 19:51:05.974193 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 8 19:51:05.974201 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 8 19:51:05.974208 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 8 19:51:05.974216 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 8 19:51:05.974227 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 8 19:51:05.974234 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 8 19:51:05.974242 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 8 19:51:05.974250 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 8 19:51:05.974257 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 8 19:51:05.974265 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 8 19:51:05.974273 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 8 19:51:05.974280 kernel: iommu: Default domain type: Translated
Oct 8 19:51:05.974288 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 19:51:05.974298 kernel: PCI: Using ACPI for IRQ routing
Oct 8 19:51:05.974306 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 8 19:51:05.974314 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 8 19:51:05.974321 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 8 19:51:05.974442 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 8 19:51:05.974562 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 8 19:51:05.974681 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 8 19:51:05.974691 kernel: vgaarb: loaded
Oct 8 19:51:05.974702 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 8 19:51:05.974710 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 8 19:51:05.974718 kernel: clocksource: Switched to clocksource kvm-clock
Oct 8 19:51:05.974726 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:51:05.974734 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:51:05.974742 kernel: pnp: PnP ACPI init
Oct 8 19:51:05.974884 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 8 19:51:05.974896 kernel: pnp: PnP ACPI: found 6 devices
Oct 8 19:51:05.974907 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 19:51:05.974928 kernel: NET: Registered PF_INET protocol family
Oct 8 19:51:05.974936 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:51:05.974944 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:51:05.974952 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:51:05.974960 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:51:05.974968 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:51:05.974975 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:51:05.974983 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:51:05.974994 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:51:05.975002 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:51:05.975010 kernel: NET: Registered PF_XDP protocol family
Oct 8 19:51:05.975128 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 8 19:51:05.975250 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 8 19:51:05.975362 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 8 19:51:05.975474 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 8 19:51:05.975586 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 8 19:51:05.975702 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 8 19:51:05.975713 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:51:05.975721 kernel: Initialise system trusted keyrings
Oct 8 19:51:05.975729 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:51:05.975736 kernel: Key type asymmetric registered
Oct 8 19:51:05.975744 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:51:05.975752 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 19:51:05.975760 kernel: io scheduler mq-deadline registered
Oct 8 19:51:05.975768 kernel: io scheduler kyber registered
Oct 8 19:51:05.975775 kernel: io scheduler bfq registered
Oct 8 19:51:05.975786 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 19:51:05.975794 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 8 19:51:05.975802 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 8 19:51:05.975810 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 8 19:51:05.975818 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:51:05.975826 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 19:51:05.975834 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 8 19:51:05.975842 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 8 19:51:05.975850 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 8 19:51:05.976002 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 8 19:51:05.976015 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 8 19:51:05.976130 kernel: rtc_cmos 00:04: registered as rtc0
Oct 8 19:51:05.976254 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T19:51:05 UTC (1728417065)
Oct 8 19:51:05.976369 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 8 19:51:05.976380 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 8 19:51:05.976388 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:51:05.976400 kernel: Segment Routing with IPv6
Oct 8 19:51:05.976407 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:51:05.976415 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:51:05.976423 kernel: Key type dns_resolver registered
Oct 8 19:51:05.976430 kernel: IPI shorthand broadcast: enabled
Oct 8 19:51:05.976438 kernel: sched_clock: Marking stable (1030003769, 121283362)->(1234508755, -83221624)
Oct 8 19:51:05.976446 kernel: registered taskstats version 1
Oct 8 19:51:05.976454 kernel: Loading compiled-in X.509 certificates
Oct 8 19:51:05.976462 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 8 19:51:05.976472 kernel: Key type .fscrypt registered
Oct 8 19:51:05.976480 kernel: Key type fscrypt-provisioning registered
Oct 8 19:51:05.976487 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:51:05.976495 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:51:05.976503 kernel: ima: No architecture policies found
Oct 8 19:51:05.976510 kernel: clk: Disabling unused clocks
Oct 8 19:51:05.976518 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 19:51:05.976526 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 19:51:05.976534 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 19:51:05.976544 kernel: Run /init as init process
Oct 8 19:51:05.976552 kernel: with arguments:
Oct 8 19:51:05.976559 kernel: /init
Oct 8 19:51:05.976567 kernel: with environment:
Oct 8 19:51:05.976574 kernel: HOME=/
Oct 8 19:51:05.976582 kernel: TERM=linux
Oct 8 19:51:05.976589 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:51:05.976599 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:51:05.976612 systemd[1]: Detected virtualization kvm.
Oct 8 19:51:05.976620 systemd[1]: Detected architecture x86-64.
Oct 8 19:51:05.976628 systemd[1]: Running in initrd.
Oct 8 19:51:05.976636 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:51:05.976644 systemd[1]: Hostname set to <localhost>.
Oct 8 19:51:05.976653 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:51:05.976661 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:51:05.976669 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:51:05.976681 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:51:05.976689 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:51:05.976733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:51:05.976745 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:51:05.976753 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:51:05.976766 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:51:05.976775 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:51:05.976783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:51:05.976792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:51:05.976800 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:51:05.976809 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:51:05.976818 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:51:05.976826 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:51:05.976838 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:51:05.976846 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:51:05.976855 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:51:05.976864 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:51:05.976873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:51:05.976882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:51:05.976890 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:51:05.976899 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:51:05.976907 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:51:05.976971 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:51:05.976980 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:51:05.976989 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:51:05.976998 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:51:05.977007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:51:05.977015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:05.977024 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:51:05.977033 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:51:05.977045 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:51:05.977073 systemd-journald[192]: Collecting audit messages is disabled.
Oct 8 19:51:05.977096 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:51:05.977107 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:51:05.977117 systemd-journald[192]: Journal started
Oct 8 19:51:05.977138 systemd-journald[192]: Runtime Journal (/run/log/journal/7eb66b7974ba4cdb9b915952044aa6e9) is 6.0M, max 48.4M, 42.3M free.
Oct 8 19:51:05.983987 systemd-modules-load[195]: Inserted module 'overlay'
Oct 8 19:51:06.010941 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:51:06.015945 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:51:06.017513 systemd-modules-load[195]: Inserted module 'br_netfilter'
Oct 8 19:51:06.018147 kernel: Bridge firewalling registered
Oct 8 19:51:06.024105 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:51:06.025295 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:51:06.029498 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:51:06.030364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:06.034059 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:51:06.037052 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:51:06.044113 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:51:06.046816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:51:06.055478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:51:06.064061 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:51:06.065966 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:06.069666 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:51:06.087405 dracut-cmdline[233]: dracut-dracut-053
Oct 8 19:51:06.090793 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:51:06.099378 systemd-resolved[225]: Positive Trust Anchors:
Oct 8 19:51:06.099399 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:51:06.099431 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:51:06.102086 systemd-resolved[225]: Defaulting to hostname 'linux'.
Oct 8 19:51:06.103249 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:51:06.108599 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:51:06.188961 kernel: SCSI subsystem initialized
Oct 8 19:51:06.197951 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:51:06.208953 kernel: iscsi: registered transport (tcp)
Oct 8 19:51:06.230364 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:51:06.230435 kernel: QLogic iSCSI HBA Driver
Oct 8 19:51:06.284239 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:51:06.294055 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:51:06.321047 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:51:06.321101 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:51:06.322102 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:51:06.362952 kernel: raid6: avx2x4 gen() 30130 MB/s
Oct 8 19:51:06.379941 kernel: raid6: avx2x2 gen() 31294 MB/s
Oct 8 19:51:06.397157 kernel: raid6: avx2x1 gen() 25751 MB/s
Oct 8 19:51:06.397180 kernel: raid6: using algorithm avx2x2 gen() 31294 MB/s
Oct 8 19:51:06.415008 kernel: raid6: .... xor() 19982 MB/s, rmw enabled
Oct 8 19:51:06.415032 kernel: raid6: using avx2x2 recovery algorithm
Oct 8 19:51:06.434942 kernel: xor: automatically using best checksumming function avx
Oct 8 19:51:06.592978 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:51:06.610868 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:51:06.623116 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:51:06.642090 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Oct 8 19:51:06.648722 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:51:06.656116 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:51:06.670475 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Oct 8 19:51:06.710829 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:51:06.718112 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:51:06.791901 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:51:06.804108 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:51:06.816604 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:51:06.820406 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:51:06.823363 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:51:06.825996 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:51:06.836149 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:51:06.838782 kernel: cryptd: max_cpu_qlen set to 1000
Oct 8 19:51:06.848970 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 8 19:51:06.849356 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:51:06.855225 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 8 19:51:06.860894 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:51:06.860944 kernel: GPT:9289727 != 19775487
Oct 8 19:51:06.860961 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:51:06.860976 kernel: GPT:9289727 != 19775487
Oct 8 19:51:06.860990 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:51:06.861004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:06.860319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:51:06.860470 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:06.864612 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:51:06.867144 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:51:06.868482 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:06.870932 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:06.880938 kernel: libata version 3.00 loaded.
Oct 8 19:51:06.882374 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 8 19:51:06.882396 kernel: AES CTR mode by8 optimization enabled
Oct 8 19:51:06.883048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:51:06.895948 kernel: ahci 0000:00:1f.2: version 3.0
Oct 8 19:51:06.896222 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 8 19:51:06.900210 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 8 19:51:06.900443 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466)
Oct 8 19:51:06.900455 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 8 19:51:06.908967 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (465)
Oct 8 19:51:06.911984 kernel: scsi host0: ahci
Oct 8 19:51:06.912937 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 8 19:51:06.948668 kernel: scsi host1: ahci
Oct 8 19:51:06.948968 kernel: scsi host2: ahci
Oct 8 19:51:06.949212 kernel: scsi host3: ahci
Oct 8 19:51:06.949405 kernel: scsi host4: ahci
Oct 8 19:51:06.949594 kernel: scsi host5: ahci
Oct 8 19:51:06.949783 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Oct 8 19:51:06.949800 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Oct 8 19:51:06.949820 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Oct 8 19:51:06.949835 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Oct 8 19:51:06.949849 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Oct 8 19:51:06.949863 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Oct 8 19:51:06.951034 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:51:06.964971 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 8 19:51:06.971247 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:51:06.976633 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 8 19:51:06.977925 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 8 19:51:06.995249 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:51:06.996465 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:51:07.006108 disk-uuid[554]: Primary Header is updated.
Oct 8 19:51:07.006108 disk-uuid[554]: Secondary Entries is updated.
Oct 8 19:51:07.006108 disk-uuid[554]: Secondary Header is updated.
Oct 8 19:51:07.009772 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:07.015970 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:07.018902 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:51:07.228937 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 8 19:51:07.229029 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:07.229041 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:07.229051 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:07.229969 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 8 19:51:07.230084 kernel: ata3.00: applying bridge limits
Oct 8 19:51:07.230949 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:07.231953 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 8 19:51:07.232960 kernel: ata3.00: configured for UDMA/100
Oct 8 19:51:07.232990 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 19:51:07.302977 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 8 19:51:07.303457 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 19:51:07.317480 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 8 19:51:08.014945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 8 19:51:08.016002 disk-uuid[558]: The operation has completed successfully.
Oct 8 19:51:08.043864 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:51:08.044005 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:51:08.076151 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:51:08.081516 sh[591]: Success
Oct 8 19:51:08.133961 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 8 19:51:08.173802 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:51:08.187795 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:51:08.190239 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:51:08.208292 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 8 19:51:08.208324 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:08.208336 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:51:08.210565 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:51:08.210581 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:51:08.215928 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:51:08.218298 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:51:08.225050 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:51:08.226156 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:51:08.241645 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:08.241690 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:08.241702 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:51:08.244996 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:51:08.254617 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:51:08.256531 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:08.265870 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:51:08.273118 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:51:08.419161 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:51:08.427111 ignition[689]: Ignition 2.19.0
Oct 8 19:51:08.427124 ignition[689]: Stage: fetch-offline
Oct 8 19:51:08.429137 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:51:08.427161 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:08.427189 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:08.427323 ignition[689]: parsed url from cmdline: ""
Oct 8 19:51:08.427327 ignition[689]: no config URL provided
Oct 8 19:51:08.427332 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:51:08.427342 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:51:08.427370 ignition[689]: op(1): [started] loading QEMU firmware config module
Oct 8 19:51:08.427376 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 8 19:51:08.443632 ignition[689]: op(1): [finished] loading QEMU firmware config module
Oct 8 19:51:08.454641 systemd-networkd[778]: lo: Link UP
Oct 8 19:51:08.454652 systemd-networkd[778]: lo: Gained carrier
Oct 8 19:51:08.456501 systemd-networkd[778]: Enumeration completed
Oct 8 19:51:08.457022 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:51:08.457490 systemd[1]: Reached target network.target - Network.
Oct 8 19:51:08.457883 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:51:08.457888 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:51:08.459248 systemd-networkd[778]: eth0: Link UP
Oct 8 19:51:08.459252 systemd-networkd[778]: eth0: Gained carrier
Oct 8 19:51:08.459258 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:51:08.483971 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:51:08.500681 ignition[689]: parsing config with SHA512: ee8d1013ed4e7a4b363abad4c08ac90c2708290835869a10a6a8c25eee7f7a6180d6b9aa279cda9f5852fc0d3c632b757ddd247a9bc097f89773633d6cc2f2c0
Oct 8 19:51:08.506102 unknown[689]: fetched base config from "system"
Oct 8 19:51:08.506118 unknown[689]: fetched user config from "qemu"
Oct 8 19:51:08.506771 ignition[689]: fetch-offline: fetch-offline passed
Oct 8 19:51:08.506869 ignition[689]: Ignition finished successfully
Oct 8 19:51:08.509524 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:51:08.511362 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 8 19:51:08.519434 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:51:08.562304 ignition[784]: Ignition 2.19.0
Oct 8 19:51:08.562317 ignition[784]: Stage: kargs
Oct 8 19:51:08.562487 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:08.562499 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:08.566290 ignition[784]: kargs: kargs passed
Oct 8 19:51:08.566962 ignition[784]: Ignition finished successfully
Oct 8 19:51:08.571027 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:51:08.583070 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:51:08.625418 ignition[792]: Ignition 2.19.0
Oct 8 19:51:08.625430 ignition[792]: Stage: disks
Oct 8 19:51:08.625612 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:08.625625 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:08.628557 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:51:08.626434 ignition[792]: disks: disks passed
Oct 8 19:51:08.630369 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:51:08.626481 ignition[792]: Ignition finished successfully
Oct 8 19:51:08.632388 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:51:08.634330 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:51:08.634732 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:51:08.635241 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:51:08.642055 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:51:08.655864 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:51:08.664146 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:51:08.676076 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:51:08.778037 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 8 19:51:08.778614 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:51:08.780198 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:51:08.793038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:51:08.795320 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:51:08.796707 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:51:08.796747 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:51:08.807365 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811)
Oct 8 19:51:08.807393 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:08.807408 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:08.796769 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:51:08.813809 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:51:08.813839 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:51:08.805382 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:51:08.811658 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:51:08.815016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:51:08.869216 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:51:08.876014 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:51:08.882240 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:51:08.886107 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:51:08.982330 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:51:08.995006 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:51:08.996962 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:51:09.004942 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:09.024232 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:51:09.037575 ignition[923]: INFO : Ignition 2.19.0
Oct 8 19:51:09.037575 ignition[923]: INFO : Stage: mount
Oct 8 19:51:09.039450 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:09.039450 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:09.039450 ignition[923]: INFO : mount: mount passed
Oct 8 19:51:09.039450 ignition[923]: INFO : Ignition finished successfully
Oct 8 19:51:09.044196 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:51:09.064157 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:51:09.206999 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:51:09.219253 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:51:09.227942 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938)
Oct 8 19:51:09.229999 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 8 19:51:09.230041 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 8 19:51:09.230056 kernel: BTRFS info (device vda6): using free space tree
Oct 8 19:51:09.233939 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 8 19:51:09.235656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:51:09.290141 ignition[955]: INFO : Ignition 2.19.0
Oct 8 19:51:09.290141 ignition[955]: INFO : Stage: files
Oct 8 19:51:09.292395 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:51:09.292395 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:51:09.292395 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:51:09.292395 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:51:09.292395 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:51:09.321799 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:51:09.321799 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:51:09.321799 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:51:09.321799 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:51:09.321799 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 8 19:51:09.295205 unknown[955]: wrote ssh authorized keys file for user: core
Oct 8 19:51:09.371888 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:51:09.633728 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 8 19:51:09.633728 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:51:09.637737 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Oct 8 19:51:10.044193 systemd-networkd[778]: eth0: Gained IPv6LL
Oct 8 19:51:10.131648 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 19:51:10.791490 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:51:10.791490 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 19:51:10.796008 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:51:10.796008 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:51:10.796008 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 19:51:10.796008 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 19:51:10.796008 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:51:10.796008 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:51:10.796008 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 19:51:10.796008 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:51:10.821472 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:51:10.829395 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:51:10.831019 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:51:10.831019 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:51:10.831019 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:51:10.831019 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:51:10.831019 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:51:10.831019 ignition[955]: INFO : files: files passed
Oct 8 19:51:10.831019 ignition[955]: INFO : Ignition finished successfully
Oct 8 19:51:10.832705 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:51:10.851322 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:51:10.854475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:51:10.857411 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 19:51:10.858714 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 19:51:10.864933 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Oct 8 19:51:10.869309 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:51:10.869309 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:51:10.872993 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:51:10.877255 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:51:10.880453 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 19:51:10.898283 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 19:51:10.942387 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 19:51:10.942543 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 19:51:10.948851 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 19:51:10.951580 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 19:51:10.952001 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 19:51:10.956655 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 19:51:10.978817 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:51:10.981440 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 19:51:10.998123 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:51:10.998657 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:51:11.001616 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 19:51:11.004283 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 19:51:11.004436 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:51:11.007576 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 19:51:11.008339 systemd[1]: Stopped target basic.target - Basic System. Oct 8 19:51:11.008619 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 19:51:11.008954 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:51:11.014509 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 19:51:11.014841 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 19:51:11.015339 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:51:11.020968 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 19:51:11.021487 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 19:51:11.021813 systemd[1]: Stopped target swap.target - Swaps. Oct 8 19:51:11.022294 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 19:51:11.022467 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:51:11.030936 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Oct 8 19:51:11.031564 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:51:11.034580 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 19:51:11.036590 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:51:11.064654 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 19:51:11.064843 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 19:51:11.067559 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 19:51:11.067767 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:51:11.071032 systemd[1]: Stopped target paths.target - Path Units. Oct 8 19:51:11.073752 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 19:51:11.079059 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:51:11.090788 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 19:51:11.091243 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 19:51:11.091611 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 19:51:11.091751 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:51:11.097298 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 19:51:11.097395 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:51:11.099303 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 19:51:11.099426 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:51:11.101490 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 19:51:11.101605 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 19:51:11.118285 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 19:51:11.118885 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 19:51:11.119108 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:51:11.120504 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 19:51:11.123161 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 19:51:11.123362 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:51:11.125350 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 19:51:11.125499 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:51:11.136257 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 19:51:11.137396 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 19:51:11.144826 ignition[1011]: INFO : Ignition 2.19.0 Oct 8 19:51:11.144826 ignition[1011]: INFO : Stage: umount Oct 8 19:51:11.146552 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:51:11.146552 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:51:11.146552 ignition[1011]: INFO : umount: umount passed Oct 8 19:51:11.146552 ignition[1011]: INFO : Ignition finished successfully Oct 8 19:51:11.160784 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 19:51:11.162355 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 19:51:11.163435 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Oct 8 19:51:11.165698 systemd[1]: Stopped target network.target - Network. Oct 8 19:51:11.167625 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 19:51:11.168717 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 19:51:11.181223 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 19:51:11.181279 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 19:51:11.184396 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 19:51:11.184453 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 19:51:11.187273 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 19:51:11.187327 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 19:51:11.190515 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 19:51:11.192742 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 19:51:11.204808 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 19:51:11.211222 systemd-networkd[778]: eth0: DHCPv6 lease lost Oct 8 19:51:11.211887 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 19:51:11.216190 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 19:51:11.217228 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 19:51:11.220460 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 19:51:11.220541 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:51:11.234120 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 19:51:11.234591 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 19:51:11.234675 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:51:11.235030 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:51:11.235083 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:51:11.235509 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 19:51:11.235569 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 19:51:11.235900 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 19:51:11.235977 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 19:51:11.236551 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:51:11.278316 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 19:51:11.278521 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 19:51:11.280843 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 19:51:11.281088 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:51:11.283518 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 19:51:11.283623 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 19:51:11.286135 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 19:51:11.286187 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:51:11.288206 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 19:51:11.288271 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:51:11.290593 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Oct 8 19:51:11.290652 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 19:51:11.292702 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:51:11.292761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:51:11.306080 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 19:51:11.308330 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 19:51:11.308397 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:51:11.310743 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:51:11.310803 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:51:11.315592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 19:51:11.315750 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 19:51:11.666022 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 19:51:11.666200 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 19:51:11.668987 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 19:51:11.671126 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 19:51:11.671230 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 19:51:11.683302 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 19:51:11.693330 systemd[1]: Switching root. Oct 8 19:51:11.728047 systemd-journald[192]: Journal stopped Oct 8 19:51:12.962142 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Oct 8 19:51:12.963405 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 19:51:12.963434 kernel: SELinux: policy capability open_perms=1 Oct 8 19:51:12.963449 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 19:51:12.963464 kernel: SELinux: policy capability always_check_network=0 Oct 8 19:51:12.963484 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 19:51:12.963504 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 19:51:12.963519 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 19:51:12.963536 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 19:51:12.963553 kernel: audit: type=1403 audit(1728417072.151:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 19:51:12.963572 systemd[1]: Successfully loaded SELinux policy in 44.934ms. Oct 8 19:51:12.963607 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.631ms. Oct 8 19:51:12.963625 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:51:12.963643 systemd[1]: Detected virtualization kvm. Oct 8 19:51:12.963663 systemd[1]: Detected architecture x86-64. Oct 8 19:51:12.963680 systemd[1]: Detected first boot. Oct 8 19:51:12.963697 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:51:12.963714 zram_generator::config[1056]: No configuration found. Oct 8 19:51:12.963732 systemd[1]: Populated /etc with preset unit settings. Oct 8 19:51:12.963749 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
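[Editor's note] The journald handoff above ("Journal stopped" in the initrd, then the first message from the new journald after switch-root) gives a rough measure of the transition time. A minimal sketch, assuming both stamps share the same day and clock:

```python
# Rough estimate of the switch-root gap from the two journald lines above:
# "Journal stopped" at 19:51:11.728047 and the new journald's first message
# at 19:51:12.962142 (which also includes SELinux policy load, ~45 ms).
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"
stopped = datetime.strptime("Oct 8 19:51:11.728047", FMT)
restarted = datetime.strptime("Oct 8 19:51:12.962142", FMT)
print(f"journal outage across switch-root: {(restarted - stopped).total_seconds():.3f}s")
# -> journal outage across switch-root: 1.234s
```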
Oct 8 19:51:12.963773 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 19:51:12.963791 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 19:51:12.963812 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 19:51:12.963828 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 19:51:12.963844 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 19:51:12.963866 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 19:51:12.963882 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 19:51:12.963899 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 19:51:12.963930 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 19:51:12.963985 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 19:51:12.964007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:51:12.964025 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:51:12.964042 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 19:51:12.964059 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 19:51:12.964076 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 19:51:12.964094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 19:51:12.964110 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 8 19:51:12.964127 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:51:12.964144 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 19:51:12.964163 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 19:51:12.964180 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 19:51:12.964196 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 19:51:12.964212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:51:12.964230 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:51:12.964247 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:51:12.964263 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:51:12.964280 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 19:51:12.964299 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 19:51:12.964316 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:51:12.964333 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 19:51:12.964358 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:51:12.964374 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 19:51:12.964393 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 19:51:12.964409 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 19:51:12.964424 systemd[1]: Mounting media.mount - External Media Directory... 
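[Editor's note] The odd-looking slice names above, e.g. system-addon\x2dconfig.slice, are systemd's unit-name escaping at work: "/" in a path component becomes "-", so a literal "-" (and any other byte outside the safe set) is hex-escaped as \xXX. A simplified sketch of the rule, not a full systemd-escape clone:

```python
# Simplified systemd unit-name escaping: explains "addon-config" appearing as
# "addon\x2dconfig" inside system-addon\x2dconfig.slice above.
SAFE = set("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789:_.")

def escape_component(s: str) -> str:
    out = []
    for i, byte in enumerate(s.encode()):
        c = chr(byte)
        if c == "/":
            out.append("-")
        elif c in SAFE and not (c == "." and i == 0):  # leading "." is escaped too
            out.append(c)
        else:
            out.append(f"\\x{byte:02x}")
    return "".join(out)

print(escape_component("addon-config"))  # -> addon\x2dconfig
```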
Oct 8 19:51:12.965533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:12.965556 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 19:51:12.965568 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 19:51:12.965580 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 19:51:12.965593 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 19:51:12.965605 systemd[1]: Reached target machines.target - Containers. Oct 8 19:51:12.965617 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 19:51:12.965629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:51:12.965641 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 19:51:12.965653 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 19:51:12.965668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:51:12.965680 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:51:12.965692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:51:12.965703 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 19:51:12.965715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:51:12.965727 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 19:51:12.965757 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 19:51:12.965769 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 19:51:12.965784 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 19:51:12.965796 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 19:51:12.965807 kernel: loop: module loaded Oct 8 19:51:12.965819 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 19:51:12.965831 kernel: fuse: init (API version 7.39) Oct 8 19:51:12.965842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 19:51:12.965855 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 19:51:12.965867 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 19:51:12.965879 kernel: ACPI: bus type drm_connector registered Oct 8 19:51:12.965893 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:51:12.965905 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 19:51:12.965942 systemd[1]: Stopped verity-setup.service. Oct 8 19:51:12.965969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:12.966011 systemd-journald[1126]: Collecting audit messages is disabled. Oct 8 19:51:12.966034 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 19:51:12.966046 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Oct 8 19:51:12.966063 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 19:51:12.966075 systemd-journald[1126]: Journal started Oct 8 19:51:12.966100 systemd-journald[1126]: Runtime Journal (/run/log/journal/7eb66b7974ba4cdb9b915952044aa6e9) is 6.0M, max 48.4M, 42.3M free. Oct 8 19:51:12.697930 systemd[1]: Queued start job for default target multi-user.target. Oct 8 19:51:12.726224 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 8 19:51:12.726786 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 19:51:12.969899 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 19:51:12.970499 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 19:51:12.972023 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 19:51:12.973484 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 19:51:12.975011 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 19:51:12.976761 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:51:12.978684 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 19:51:12.978898 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 19:51:12.980650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:51:12.980843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:51:12.982532 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:51:12.982724 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:51:12.984280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:51:12.984457 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:51:12.986257 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 19:51:12.986437 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 19:51:12.988050 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:51:12.988246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:51:12.989907 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 19:51:12.991793 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 19:51:12.993629 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 19:51:13.011134 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 19:51:13.026173 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 19:51:13.029415 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 19:51:13.030855 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 19:51:13.030907 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:51:13.033569 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 19:51:13.036570 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 19:51:13.039317 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 19:51:13.040775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 8 19:51:13.043377 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 19:51:13.046001 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 19:51:13.047390 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:51:13.051797 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 19:51:13.053374 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:51:13.055945 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:51:13.059148 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 19:51:13.070115 systemd-journald[1126]: Time spent on flushing to /var/log/journal/7eb66b7974ba4cdb9b915952044aa6e9 is 23.357ms for 949 entries. Oct 8 19:51:13.070115 systemd-journald[1126]: System Journal (/var/log/journal/7eb66b7974ba4cdb9b915952044aa6e9) is 8.0M, max 195.6M, 187.6M free. Oct 8 19:51:13.105883 systemd-journald[1126]: Received client request to flush runtime journal. Oct 8 19:51:13.105966 kernel: loop0: detected capacity change from 0 to 210664 Oct 8 19:51:13.072207 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 19:51:13.075552 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:51:13.077428 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 19:51:13.078803 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 19:51:13.080620 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 19:51:13.092494 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 19:51:13.097267 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 19:51:13.110357 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 19:51:13.117076 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 19:51:13.119331 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 19:51:13.123524 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 19:51:13.121580 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:51:13.135415 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 19:51:13.144752 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 19:51:13.147473 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 19:51:13.148401 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 19:51:13.154653 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 8 19:51:13.160947 kernel: loop1: detected capacity change from 0 to 142488 Oct 8 19:51:13.174316 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Oct 8 19:51:13.174341 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Oct 8 19:51:13.182261 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
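[Editor's note] The flush report above (23.357 ms for 949 entries moved from the runtime journal to /var/log/journal) works out to roughly 25 microseconds per entry:

```python
# Back-of-the-envelope from the journald flush report above.
ms_total, entries = 23.357, 949
print(f"{ms_total / entries * 1000:.1f} us per entry")  # -> 24.6 us per entry
```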
Oct 8 19:51:13.196981 kernel: loop2: detected capacity change from 0 to 140768 Oct 8 19:51:13.255968 kernel: loop3: detected capacity change from 0 to 210664 Oct 8 19:51:13.267950 kernel: loop4: detected capacity change from 0 to 142488 Oct 8 19:51:13.282964 kernel: loop5: detected capacity change from 0 to 140768 Oct 8 19:51:13.295222 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 8 19:51:13.296033 (sd-merge)[1196]: Merged extensions into '/usr'. Oct 8 19:51:13.300253 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 19:51:13.300273 systemd[1]: Reloading... Oct 8 19:51:13.381965 zram_generator::config[1226]: No configuration found. Oct 8 19:51:13.493957 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 19:51:13.539285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:51:13.591296 systemd[1]: Reloading finished in 290 ms. Oct 8 19:51:13.625774 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 19:51:13.627547 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 19:51:13.642180 systemd[1]: Starting ensure-sysext.service... Oct 8 19:51:13.644461 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 19:51:13.661048 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Oct 8 19:51:13.661070 systemd[1]: Reloading... Oct 8 19:51:13.685315 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 19:51:13.685677 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 19:51:13.686668 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 19:51:13.686981 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Oct 8 19:51:13.687056 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Oct 8 19:51:13.694790 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:51:13.694805 systemd-tmpfiles[1260]: Skipping /boot Oct 8 19:51:13.723964 zram_generator::config[1286]: No configuration found. Oct 8 19:51:13.738135 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:51:13.738289 systemd-tmpfiles[1260]: Skipping /boot Oct 8 19:51:13.867064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:51:13.922549 systemd[1]: Reloading finished in 260 ms. Oct 8 19:51:13.950038 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 19:51:13.968987 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 19:51:13.981813 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:51:13.985330 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
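[Editor's note] The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr (each loopN capacity change is one image being attached), followed by a daemon reload. A minimal discovery sketch, assuming the documented search directories; /etc/extensions is confirmed by the kubernetes.raw symlink written earlier in the log:

```python
# Enumerate candidate sysext images the way systemd-sysext-style discovery
# might: raw images in the standard search hierarchies. Illustrative only.
from pathlib import Path

SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for base in SEARCH:
    for image in sorted(Path(base).glob("*.raw")):  # nonexistent dirs yield nothing
        print(f"candidate sysext image: {image}")
```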
Oct 8 19:51:13.988725 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 8 19:51:13.994173 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 19:51:13.998243 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:51:14.002000 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 19:51:14.006407 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:14.006651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:51:14.011700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:51:14.018311 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:51:14.022011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:51:14.023595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:51:14.026250 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 19:51:14.027996 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:14.029151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:51:14.029344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:51:14.034146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:51:14.034355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:51:14.036301 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:51:14.036577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:51:14.042272 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:51:14.042788 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:51:14.044127 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 19:51:14.044440 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Oct 8 19:51:14.050272 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 19:51:14.054197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:14.054402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:51:14.062629 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:51:14.068575 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:51:14.072718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:51:14.074296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:51:14.079232 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Oct 8 19:51:14.080682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:14.082620 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:51:14.084243 augenrules[1357]: No rules Oct 8 19:51:14.084562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:51:14.087239 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:51:14.092657 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:51:14.092889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:51:14.098886 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 19:51:14.100697 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:51:14.103235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:51:14.103580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:51:14.111550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:14.111780 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:51:14.123229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:51:14.127393 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:51:14.134181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:51:14.135710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:51:14.148246 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:51:14.149563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:51:14.149723 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 19:51:14.150703 systemd[1]: Finished ensure-sysext.service. Oct 8 19:51:14.153663 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 19:51:14.155580 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 19:51:14.157472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:51:14.157763 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:51:14.161466 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:51:14.163161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:51:14.165312 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:51:14.166058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:51:14.185779 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 8 19:51:14.188021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 8 19:51:14.191963 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1369) Oct 8 19:51:14.192283 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 19:51:14.194019 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 19:51:14.196986 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1369) Oct 8 19:51:14.251388 systemd-resolved[1329]: Positive Trust Anchors: Oct 8 19:51:14.251415 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 19:51:14.251447 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 19:51:14.269972 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 8 19:51:14.275254 systemd-resolved[1329]: Defaulting to hostname 'linux'. Oct 8 19:51:14.277290 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:51:14.279054 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:51:14.287212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 19:51:14.290958 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1393) Oct 8 19:51:14.297266 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 19:51:14.309942 kernel: ACPI: button: Power Button [PWRF] Oct 8 19:51:14.318766 systemd-networkd[1394]: lo: Link UP Oct 8 19:51:14.318780 systemd-networkd[1394]: lo: Gained carrier Oct 8 19:51:14.320633 systemd-networkd[1394]: Enumeration completed Oct 8 19:51:14.321080 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:51:14.321093 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:51:14.321959 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:51:14.322357 systemd[1]: Reached target network.target - Network. Oct 8 19:51:14.322582 systemd-networkd[1394]: eth0: Link UP Oct 8 19:51:14.322599 systemd-networkd[1394]: eth0: Gained carrier Oct 8 19:51:14.322628 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
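[Editor's note] The positive trust anchor systemd-resolved logs above is the IANA root zone KSK-2017 DS record; its four RDATA fields are key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256), and the key digest. A small parsing sketch:

```python
# Split the DS record from the resolved log above into its named fields.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
owner, _cls, _type, key_tag, alg, digest_type, digest = ds.split()
print(f"key tag {key_tag}, algorithm {alg} (RSASHA256), digest type {digest_type} (SHA-256)")
```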
Oct 8 19:51:14.336189 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 8 19:51:14.336503 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 8 19:51:14.336707 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 8 19:51:14.336722 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 8 19:51:14.337288 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 19:51:14.339042 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:51:14.339322 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 19:51:14.353387 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 19:51:14.355389 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 19:51:14.359032 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 8 19:51:14.359159 systemd-timesyncd[1402]: Initial clock synchronization to Tue 2024-10-08 19:51:14.299256 UTC. Oct 8 19:51:14.401959 kernel: mousedev: PS/2 mouse device common for all mice Oct 8 19:51:14.483319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:51:14.601698 kernel: kvm_amd: TSC scaling supported Oct 8 19:51:14.601783 kernel: kvm_amd: Nested Virtualization enabled Oct 8 19:51:14.601803 kernel: kvm_amd: Nested Paging enabled Oct 8 19:51:14.601849 kernel: kvm_amd: LBR virtualization supported Oct 8 19:51:14.602500 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 8 19:51:14.604119 kernel: kvm_amd: Virtual GIF supported Oct 8 19:51:14.626037 kernel: EDAC MC: Ver: 3.0.0 Oct 8 19:51:14.665286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:51:14.670357 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 19:51:14.681490 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 19:51:14.694441 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:51:14.730801 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 19:51:14.732517 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:51:14.733908 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:51:14.735254 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 19:51:14.736558 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 19:51:14.738140 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 19:51:14.739456 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 19:51:14.740747 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 19:51:14.742036 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 19:51:14.742077 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:51:14.743012 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:51:14.744717 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
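[Editor's note] systemd-timesyncd reached the gateway's NTP service at 10.0.0.1:123 and stepped the clock (the sync message records the kernel clock being set to 19:51:14.299256 UTC). For illustration, a minimal SNTP query per RFC 4330; the server address is a placeholder and delay/offset math is omitted:

```python
# Minimal SNTP client sketch: the kind of exchange timesyncd performed above.
import socket
import struct

NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "10.0.0.1") -> float:
    pkt = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        s.sendto(pkt, (server, 123))
        data, _ = s.recvfrom(48)
    transmit = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds
    return transmit - NTP_EPOCH_DELTA  # Unix time

# print(sntp_time())  # requires a reachable NTP server
```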
Oct 8 19:51:14.747567 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 19:51:14.757965 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 19:51:14.770255 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 19:51:14.772009 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 19:51:14.773272 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:51:14.774303 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:51:14.775349 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:51:14.775383 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:51:14.776455 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 19:51:14.778952 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 19:51:14.782960 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:51:14.783354 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 19:51:14.788542 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 19:51:14.794579 jq[1434]: false Oct 8 19:51:14.798406 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 19:51:14.800098 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 19:51:14.803071 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 19:51:14.806079 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 19:51:14.825078 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 19:51:14.834168 dbus-daemon[1433]: [system] SELinux support is enabled Oct 8 19:51:14.836210 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 19:51:14.849070 extend-filesystems[1435]: Found loop3 Oct 8 19:51:14.849070 extend-filesystems[1435]: Found loop4 Oct 8 19:51:14.849070 extend-filesystems[1435]: Found loop5 Oct 8 19:51:14.849070 extend-filesystems[1435]: Found sr0 Oct 8 19:51:14.849070 extend-filesystems[1435]: Found vda Oct 8 19:51:14.849070 extend-filesystems[1435]: Found vda1 Oct 8 19:51:14.849070 extend-filesystems[1435]: Found vda2 Oct 8 19:51:14.849070 extend-filesystems[1435]: Found vda3 Oct 8 19:51:14.849070 extend-filesystems[1435]: Found usr Oct 8 19:51:14.849070 extend-filesystems[1435]: Found vda4 Oct 8 19:51:14.887490 extend-filesystems[1435]: Found vda6 Oct 8 19:51:14.887490 extend-filesystems[1435]: Found vda7 Oct 8 19:51:14.887490 extend-filesystems[1435]: Found vda9 Oct 8 19:51:14.887490 extend-filesystems[1435]: Checking size of /dev/vda9 Oct 8 19:51:14.849385 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 19:51:14.850079 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:51:14.894255 jq[1453]: true Oct 8 19:51:14.851789 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 19:51:14.882055 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Oct 8 19:51:14.885631 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 19:51:14.892118 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 19:51:14.895660 extend-filesystems[1435]: Resized partition /dev/vda9 Oct 8 19:51:14.898559 update_engine[1450]: I20241008 19:51:14.898471 1450 main.cc:92] Flatcar Update Engine starting Oct 8 19:51:14.900271 update_engine[1450]: I20241008 19:51:14.900135 1450 update_check_scheduler.cc:74] Next update check in 10m53s Oct 8 19:51:14.902039 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024) Oct 8 19:51:14.909183 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1383) Oct 8 19:51:14.909210 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 19:51:14.907727 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 19:51:14.908107 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:51:14.909599 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 19:51:14.909883 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 19:51:14.919207 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:51:14.919464 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 19:51:14.926939 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Oct 8 19:51:14.926976 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 8 19:51:14.931163 systemd-logind[1446]: New seat seat0. Oct 8 19:51:14.947943 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 19:51:14.950637 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:51:14.954825 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:51:14.983836 jq[1460]: true Oct 8 19:51:14.985502 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 19:51:14.985502 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:51:14.985502 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 19:51:15.007328 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Oct 8 19:51:15.004359 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 19:51:15.004380 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:51:15.004558 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 19:51:15.005339 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 19:51:15.005471 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:51:15.019257 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:51:15.025518 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:51:15.025936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Oct 8 19:51:15.039059 tar[1459]: linux-amd64/helm Oct 8 19:51:15.064978 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:51:15.068073 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:51:15.070078 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:51:15.071465 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:51:15.074069 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 19:51:15.133508 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:51:15.149330 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:51:15.158385 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:51:15.158729 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:51:15.220270 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:51:15.248191 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:51:15.257202 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:51:15.259861 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 19:51:15.261574 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:51:15.455271 containerd[1461]: time="2024-10-08T19:51:15.455146668Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 19:51:15.485870 containerd[1461]: time="2024-10-08T19:51:15.485809564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:15.487955 containerd[1461]: time="2024-10-08T19:51:15.487900215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:15.487955 containerd[1461]: time="2024-10-08T19:51:15.487946922Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:51:15.488030 containerd[1461]: time="2024-10-08T19:51:15.487965289Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:51:15.488209 containerd[1461]: time="2024-10-08T19:51:15.488189619Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:51:15.488240 containerd[1461]: time="2024-10-08T19:51:15.488214499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:15.488326 containerd[1461]: time="2024-10-08T19:51:15.488304960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:15.488346 containerd[1461]: time="2024-10-08T19:51:15.488323675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:15.488557 containerd[1461]: time="2024-10-08T19:51:15.488527907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:15.488593 containerd[1461]: time="2024-10-08T19:51:15.488574476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:15.488613 containerd[1461]: time="2024-10-08T19:51:15.488601653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:15.488639 containerd[1461]: time="2024-10-08T19:51:15.488613208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:15.488733 containerd[1461]: time="2024-10-08T19:51:15.488716050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:15.489018 containerd[1461]: time="2024-10-08T19:51:15.488999248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:51:15.489153 containerd[1461]: time="2024-10-08T19:51:15.489133464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:51:15.489176 containerd[1461]: time="2024-10-08T19:51:15.489151443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:51:15.489295 containerd[1461]: time="2024-10-08T19:51:15.489278469Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:51:15.489363 containerd[1461]: time="2024-10-08T19:51:15.489345612Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:51:15.495648 containerd[1461]: time="2024-10-08T19:51:15.495611918Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:51:15.495704 containerd[1461]: time="2024-10-08T19:51:15.495680711Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:51:15.495726 containerd[1461]: time="2024-10-08T19:51:15.495701802Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:51:15.495726 containerd[1461]: time="2024-10-08T19:51:15.495717752Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:51:15.495786 containerd[1461]: time="2024-10-08T19:51:15.495733654Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:51:15.496002 containerd[1461]: time="2024-10-08T19:51:15.495971507Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:51:15.496343 containerd[1461]: time="2024-10-08T19:51:15.496304893Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 19:51:15.496602 containerd[1461]: time="2024-10-08T19:51:15.496560358Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Oct 8 19:51:15.496602 containerd[1461]: time="2024-10-08T19:51:15.496584840Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:51:15.496602 containerd[1461]: time="2024-10-08T19:51:15.496606101Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:51:15.496602 containerd[1461]: time="2024-10-08T19:51:15.496622170Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496639682Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496655562Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496674218Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496690188Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496705970Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496718966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496734032Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496758752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496773103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496794 containerd[1461]: time="2024-10-08T19:51:15.496785642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496803183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496817353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496830957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496843020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496857239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496869759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496883363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496895435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496907169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496939328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496955279Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496980894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.496989 containerd[1461]: time="2024-10-08T19:51:15.496993264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.497226 containerd[1461]: time="2024-10-08T19:51:15.497006361Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:51:15.498094 containerd[1461]: time="2024-10-08T19:51:15.498065340Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:51:15.498117 containerd[1461]: time="2024-10-08T19:51:15.498097072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:51:15.498117 containerd[1461]: time="2024-10-08T19:51:15.498110606Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:51:15.498170 containerd[1461]: time="2024-10-08T19:51:15.498123782Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:51:15.498170 containerd[1461]: time="2024-10-08T19:51:15.498138241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:51:15.498170 containerd[1461]: time="2024-10-08T19:51:15.498153555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:51:15.498234 containerd[1461]: time="2024-10-08T19:51:15.498181150Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:51:15.498234 containerd[1461]: time="2024-10-08T19:51:15.498193321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 19:51:15.498604 containerd[1461]: time="2024-10-08T19:51:15.498510270Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:51:15.498604 containerd[1461]: time="2024-10-08T19:51:15.498614901Z" level=info msg="Connect containerd service" Oct 8 19:51:15.498862 containerd[1461]: time="2024-10-08T19:51:15.498662097Z" level=info msg="using legacy CRI server" Oct 8 19:51:15.498862 containerd[1461]: time="2024-10-08T19:51:15.498672050Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:51:15.498862 containerd[1461]: time="2024-10-08T19:51:15.498797645Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:51:15.499520 containerd[1461]: time="2024-10-08T19:51:15.499485459Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:51:15.499716 
containerd[1461]: time="2024-10-08T19:51:15.499622301Z" level=info msg="Start subscribing containerd event" Oct 8 19:51:15.499856 containerd[1461]: time="2024-10-08T19:51:15.499786757Z" level=info msg="Start recovering state" Oct 8 19:51:15.499898 containerd[1461]: time="2024-10-08T19:51:15.499872823Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:51:15.499992 containerd[1461]: time="2024-10-08T19:51:15.499967988Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:51:15.500764 containerd[1461]: time="2024-10-08T19:51:15.500730195Z" level=info msg="Start event monitor" Oct 8 19:51:15.501184 containerd[1461]: time="2024-10-08T19:51:15.500812204Z" level=info msg="Start snapshots syncer" Oct 8 19:51:15.501321 containerd[1461]: time="2024-10-08T19:51:15.501268430Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:51:15.501321 containerd[1461]: time="2024-10-08T19:51:15.501283843Z" level=info msg="Start streaming server" Oct 8 19:51:15.505124 containerd[1461]: time="2024-10-08T19:51:15.503531045Z" level=info msg="containerd successfully booted in 0.051362s" Oct 8 19:51:15.503836 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:51:15.589118 tar[1459]: linux-amd64/LICENSE Oct 8 19:51:15.589236 tar[1459]: linux-amd64/README.md Oct 8 19:51:15.605513 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:51:15.740468 systemd-networkd[1394]: eth0: Gained IPv6LL Oct 8 19:51:15.744823 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:51:15.747158 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:51:15.764433 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 19:51:15.767644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:15.770586 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:51:15.806826 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:51:15.809113 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 19:51:15.809406 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 19:51:15.814035 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:51:17.206143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:17.208196 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:51:17.210189 systemd[1]: Startup finished in 1.170s (kernel) + 6.437s (initrd) + 5.102s (userspace) = 12.710s. Oct 8 19:51:17.231253 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:51:18.074500 kubelet[1545]: E1008 19:51:18.074403 1545 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:51:18.079344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:51:18.079554 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:51:18.079950 systemd[1]: kubelet.service: Consumed 2.206s CPU time. 
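
The kubelet crash above is the normal state of a node that has not been bootstrapped yet: systemd starts the unit, /var/lib/kubelet/config.yaml does not exist, and the process exits with status 1 until something writes that file. A hedged sketch of how this usually resolves, assuming kubeadm is the tool that will produce the config (consistent with the control-plane image pulls later in the log):

    # the file the kubelet is crash-looping on
    ls -l /var/lib/kubelet/config.yaml     # absent until bootstrap
    # on a control plane being initialized, this kubeadm phase writes the
    # kubelet config and restarts the service; 'kubeadm join' performs the
    # equivalent step on worker nodes
    kubeadm init phase kubelet-start
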
Oct 8 19:51:24.564771 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:51:24.566144 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:52440.service - OpenSSH per-connection server daemon (10.0.0.1:52440). Oct 8 19:51:24.615598 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 52440 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:51:24.617696 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:24.630626 systemd-logind[1446]: New session 1 of user core. Oct 8 19:51:24.632218 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:51:24.649433 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:51:24.664505 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:51:24.674617 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:51:24.678638 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:51:24.811841 systemd[1564]: Queued start job for default target default.target. Oct 8 19:51:24.824596 systemd[1564]: Created slice app.slice - User Application Slice. Oct 8 19:51:24.824629 systemd[1564]: Reached target paths.target - Paths. Oct 8 19:51:24.824643 systemd[1564]: Reached target timers.target - Timers. Oct 8 19:51:24.826539 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:51:24.839832 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:51:24.840011 systemd[1564]: Reached target sockets.target - Sockets. Oct 8 19:51:24.840032 systemd[1564]: Reached target basic.target - Basic System. Oct 8 19:51:24.840079 systemd[1564]: Reached target default.target - Main User Target. Oct 8 19:51:24.840122 systemd[1564]: Startup finished in 153ms. Oct 8 19:51:24.840609 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:51:24.842699 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:51:24.906959 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:52442.service - OpenSSH per-connection server daemon (10.0.0.1:52442). Oct 8 19:51:24.948222 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 52442 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:51:24.950222 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:24.954549 systemd-logind[1446]: New session 2 of user core. Oct 8 19:51:24.968093 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:51:25.024314 sshd[1575]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:25.037993 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:52442.service: Deactivated successfully. Oct 8 19:51:25.040641 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:51:25.042763 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:51:25.052268 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:52450.service - OpenSSH per-connection server daemon (10.0.0.1:52450). Oct 8 19:51:25.053689 systemd-logind[1446]: Removed session 2. Oct 8 19:51:25.087118 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 52450 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:51:25.089107 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:25.093842 systemd-logind[1446]: New session 3 of user core. 
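
Each "New session N of user core" above ties an sshd connection to a logind session backed by a session-N.scope unit, and the first login also spawns the per-user manager user@500.service. Purely illustrative ways to inspect that state on a live machine:

    loginctl list-sessions               # one row per active session
    loginctl session-status 1            # leader PID, TTY and scope of session 1
    systemctl status user@500.service    # the per-user manager started above
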
Oct 8 19:51:25.112192 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:51:25.163525 sshd[1582]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:25.177688 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:52450.service: Deactivated successfully. Oct 8 19:51:25.179237 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:51:25.180929 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:51:25.189147 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:52454.service - OpenSSH per-connection server daemon (10.0.0.1:52454). Oct 8 19:51:25.189982 systemd-logind[1446]: Removed session 3. Oct 8 19:51:25.220487 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 52454 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:51:25.221891 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:25.226019 systemd-logind[1446]: New session 4 of user core. Oct 8 19:51:25.236050 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:51:25.291925 sshd[1589]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:25.302692 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:52454.service: Deactivated successfully. Oct 8 19:51:25.304602 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:51:25.306418 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:51:25.321268 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:52468.service - OpenSSH per-connection server daemon (10.0.0.1:52468). Oct 8 19:51:25.322666 systemd-logind[1446]: Removed session 4. Oct 8 19:51:25.354680 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 52468 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:51:25.356525 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:25.361006 systemd-logind[1446]: New session 5 of user core. Oct 8 19:51:25.371072 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:51:25.430413 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:51:25.430781 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:51:25.444055 sudo[1600]: pam_unix(sudo:session): session closed for user root Oct 8 19:51:25.446229 sshd[1597]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:25.455394 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:52468.service: Deactivated successfully. Oct 8 19:51:25.457396 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:51:25.459223 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:51:25.460694 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:52482.service - OpenSSH per-connection server daemon (10.0.0.1:52482). Oct 8 19:51:25.461563 systemd-logind[1446]: Removed session 5. Oct 8 19:51:25.499969 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 52482 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:51:25.501705 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:25.506127 systemd-logind[1446]: New session 6 of user core. Oct 8 19:51:25.518245 systemd[1]: Started session-6.scope - Session 6 of User core. 
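
Session 5's single action was the sudo call above, switching SELinux into enforcing mode. A minimal check-and-toggle sketch; note that setenforce only changes the runtime mode and does not persist across reboots:

    getenforce                       # Enforcing, Permissive or Disabled
    sudo /usr/sbin/setenforce 1      # the exact command logged above
    getenforce                       # Enforcing until the next boot
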
Oct 8 19:51:25.574098 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:51:25.574517 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:51:25.579045 sudo[1609]: pam_unix(sudo:session): session closed for user root Oct 8 19:51:25.586636 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:51:25.587105 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:51:25.611329 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:51:25.613193 auditctl[1612]: No rules Oct 8 19:51:25.614966 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:51:25.615321 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:51:25.617634 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:51:25.651384 augenrules[1630]: No rules Oct 8 19:51:25.653384 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:51:25.654720 sudo[1608]: pam_unix(sudo:session): session closed for user root Oct 8 19:51:25.656686 sshd[1605]: pam_unix(sshd:session): session closed for user core Oct 8 19:51:25.673593 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:52482.service: Deactivated successfully. Oct 8 19:51:25.675789 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:51:25.677474 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:51:25.684319 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:52498.service - OpenSSH per-connection server daemon (10.0.0.1:52498). Oct 8 19:51:25.685273 systemd-logind[1446]: Removed session 6. Oct 8 19:51:25.717148 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 52498 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:51:25.718906 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:51:25.723330 systemd-logind[1446]: New session 7 of user core. Oct 8 19:51:25.739187 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:51:25.794534 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:51:25.794980 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:51:26.488342 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:51:26.488604 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:51:27.453773 dockerd[1660]: time="2024-10-08T19:51:27.453666023Z" level=info msg="Starting up" Oct 8 19:51:27.966230 dockerd[1660]: time="2024-10-08T19:51:27.966084533Z" level=info msg="Loading containers: start." Oct 8 19:51:28.171977 kernel: Initializing XFRM netlink socket Oct 8 19:51:28.214178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:51:28.221181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:28.269837 systemd-networkd[1394]: docker0: Link UP Oct 8 19:51:28.445266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
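
The audit-rules sequence earlier in this stretch is plain rules.d maintenance: two rule files are deleted, audit-rules.service is restarted, and both auditctl and augenrules then report an empty rule set. The same steps by hand, with the paths copied from the log:

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules    # stop, clear, reload, as logged
    sudo auditctl -l                      # prints "No rules" once the set is empty
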
Oct 8 19:51:28.451170 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:51:28.719465 dockerd[1660]: time="2024-10-08T19:51:28.719277116Z" level=info msg="Loading containers: done." Oct 8 19:51:28.741692 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4125882703-merged.mount: Deactivated successfully. Oct 8 19:51:28.747870 kubelet[1769]: E1008 19:51:28.747781 1769 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:51:28.755924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:51:28.756127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:51:28.804204 dockerd[1660]: time="2024-10-08T19:51:28.804097252Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:51:28.804398 dockerd[1660]: time="2024-10-08T19:51:28.804311452Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 19:51:28.804647 dockerd[1660]: time="2024-10-08T19:51:28.804494055Z" level=info msg="Daemon has completed initialization" Oct 8 19:51:29.616632 dockerd[1660]: time="2024-10-08T19:51:29.616496910Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:51:29.617197 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:51:31.130382 containerd[1461]: time="2024-10-08T19:51:31.130326524Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 8 19:51:32.048648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379685652.mount: Deactivated successfully. 
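
With docker now answering on /run/docker.sock, the PullImage line above is containerd's CRI plugin fetching the first control-plane image. The same pull can be driven by hand through containerd's k8s.io namespace (illustrative; a configured crictl would work equally well):

    # pull into the namespace the CRI plugin uses for Kubernetes images
    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.30.5
    ctr --namespace k8s.io images ls | grep kube-apiserver
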
Oct 8 19:51:35.844214 containerd[1461]: time="2024-10-08T19:51:35.844125832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:35.848895 containerd[1461]: time="2024-10-08T19:51:35.848788124Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754097" Oct 8 19:51:35.852141 containerd[1461]: time="2024-10-08T19:51:35.852042731Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:35.856837 containerd[1461]: time="2024-10-08T19:51:35.856728535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:35.858585 containerd[1461]: time="2024-10-08T19:51:35.858505592Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 4.728131146s" Oct 8 19:51:35.858668 containerd[1461]: time="2024-10-08T19:51:35.858592480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\"" Oct 8 19:51:35.895486 containerd[1461]: time="2024-10-08T19:51:35.895421476Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 8 19:51:38.777954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:51:38.791198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:38.957012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:38.962133 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:51:39.037288 kubelet[1903]: E1008 19:51:39.037090 1903 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:51:39.041839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:51:39.042087 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
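
The "Scheduled restart job, restart counter is at N" lines are systemd's Restart= logic re-queuing the failed kubelet roughly every ten seconds. The counter and policy can be read back from the unit; the example values below are assumptions matching the timing seen here:

    systemctl show kubelet -p NRestarts,Restart,RestartUSec
    # e.g. NRestarts=2, Restart=always, RestartUSec=10s while bootstrap is pending
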
Oct 8 19:51:41.118006 containerd[1461]: time="2024-10-08T19:51:41.117935396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:41.120874 containerd[1461]: time="2024-10-08T19:51:41.120808410Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591652" Oct 8 19:51:41.122493 containerd[1461]: time="2024-10-08T19:51:41.122465014Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:41.125491 containerd[1461]: time="2024-10-08T19:51:41.125427174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:41.126491 containerd[1461]: time="2024-10-08T19:51:41.126450777Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 5.230975654s" Oct 8 19:51:41.126552 containerd[1461]: time="2024-10-08T19:51:41.126491965Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\"" Oct 8 19:51:41.170849 containerd[1461]: time="2024-10-08T19:51:41.170790112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 8 19:51:43.772999 containerd[1461]: time="2024-10-08T19:51:43.772879071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:43.797965 containerd[1461]: time="2024-10-08T19:51:43.797792223Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17779987" Oct 8 19:51:43.848741 containerd[1461]: time="2024-10-08T19:51:43.848645232Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:43.896337 containerd[1461]: time="2024-10-08T19:51:43.896242382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:43.897830 containerd[1461]: time="2024-10-08T19:51:43.897773062Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 2.726930373s" Oct 8 19:51:43.897830 containerd[1461]: time="2024-10-08T19:51:43.897823918Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\"" Oct 8 19:51:43.929091 containerd[1461]: 
time="2024-10-08T19:51:43.929039195Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 8 19:51:46.250563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910149467.mount: Deactivated successfully. Oct 8 19:51:47.928552 containerd[1461]: time="2024-10-08T19:51:47.928428578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:47.935384 containerd[1461]: time="2024-10-08T19:51:47.935267788Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039362" Oct 8 19:51:47.940516 containerd[1461]: time="2024-10-08T19:51:47.940422560Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:47.944605 containerd[1461]: time="2024-10-08T19:51:47.944427173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:47.945239 containerd[1461]: time="2024-10-08T19:51:47.945182301Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 4.016091085s" Oct 8 19:51:47.945239 containerd[1461]: time="2024-10-08T19:51:47.945231695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\"" Oct 8 19:51:47.981120 containerd[1461]: time="2024-10-08T19:51:47.981034390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:51:49.277907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 19:51:49.287118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:51:49.475120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:51:49.481453 (kubelet)[1955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:51:49.786191 kubelet[1955]: E1008 19:51:49.786117 1955 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:51:49.791274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:51:49.791515 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:51:51.817033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2348378145.mount: Deactivated successfully. 
Oct 8 19:51:59.267510 containerd[1461]: time="2024-10-08T19:51:59.267374742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:59.398383 containerd[1461]: time="2024-10-08T19:51:59.398250077Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 8 19:51:59.511255 containerd[1461]: time="2024-10-08T19:51:59.511173683Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:59.635797 containerd[1461]: time="2024-10-08T19:51:59.635710566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:51:59.637476 containerd[1461]: time="2024-10-08T19:51:59.637394740Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 11.656301718s" Oct 8 19:51:59.637551 containerd[1461]: time="2024-10-08T19:51:59.637481403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 19:51:59.661878 containerd[1461]: time="2024-10-08T19:51:59.661817732Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 19:52:00.027802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 19:52:00.038132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:00.196286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:52:00.202020 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:52:00.292742 kubelet[2022]: E1008 19:52:00.292548 2022 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:52:00.297573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:52:00.297814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:52:00.501086 update_engine[1450]: I20241008 19:52:00.500972 1450 update_attempter.cc:509] Updating boot flags... Oct 8 19:52:00.732967 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2038) Oct 8 19:52:00.778960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2038) Oct 8 19:52:00.817971 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2038) Oct 8 19:52:02.058399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379488415.mount: Deactivated successfully. 
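
update_engine's "Updating boot flags" message is Flatcar's A/B update machinery marking the booted partition as good, and the BTRFS duplicate-device warnings appear to be udev rescanning /dev/vda3 while that happens. The updater's state can be queried with Flatcar's standard client:

    update_engine_client -status    # CURRENT_OP=UPDATE_STATUS_IDLE between checks
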
Oct 8 19:52:02.070174 containerd[1461]: time="2024-10-08T19:52:02.070079135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:02.072798 containerd[1461]: time="2024-10-08T19:52:02.072670187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 8 19:52:02.074610 containerd[1461]: time="2024-10-08T19:52:02.074580033Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:02.077982 containerd[1461]: time="2024-10-08T19:52:02.077861008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:02.079493 containerd[1461]: time="2024-10-08T19:52:02.079420713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.417543438s" Oct 8 19:52:02.079493 containerd[1461]: time="2024-10-08T19:52:02.079481717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 8 19:52:02.115548 containerd[1461]: time="2024-10-08T19:52:02.115229470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 8 19:52:04.922579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921453863.mount: Deactivated successfully. Oct 8 19:52:10.527856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 8 19:52:10.540336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:10.706572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:52:10.712539 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:52:10.899438 kubelet[2078]: E1008 19:52:10.899282 2078 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:52:10.903612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:52:10.903864 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:52:21.027858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 8 19:52:21.038188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:21.189576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
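
The pause pull above fetched registry.k8s.io/pause:3.9 even though the CRI config dumped earlier still lists SandboxImage:registry.k8s.io/pause:3.8; presumably the pre-pull targets the pause version matching the Kubernetes release being installed. A quick illustrative check of what landed in the image store:

    ctr --namespace k8s.io images ls | grep pause
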
Oct 8 19:52:21.194661 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:52:21.261664 kubelet[2132]: E1008 19:52:21.261544 2132 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:52:21.266024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:52:21.266228 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:52:25.590477 containerd[1461]: time="2024-10-08T19:52:25.590370394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:25.699033 containerd[1461]: time="2024-10-08T19:52:25.698896656Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Oct 8 19:52:25.740870 containerd[1461]: time="2024-10-08T19:52:25.740782798Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:25.800242 containerd[1461]: time="2024-10-08T19:52:25.800150770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:25.801788 containerd[1461]: time="2024-10-08T19:52:25.801707878Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 23.686419517s" Oct 8 19:52:25.801788 containerd[1461]: time="2024-10-08T19:52:25.801774013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 8 19:52:28.474284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:52:28.485332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:28.507674 systemd[1]: Reloading requested from client PID 2219 ('systemctl') (unit session-7.scope)... Oct 8 19:52:28.507696 systemd[1]: Reloading... Oct 8 19:52:28.587035 zram_generator::config[2258]: No configuration found. Oct 8 19:52:29.121691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:52:29.203638 systemd[1]: Reloading finished in 695 ms. Oct 8 19:52:29.264748 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:29.267641 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:52:29.267976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:52:29.270285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:29.443578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
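
This time the kubelet stays up: the systemd reload above reflects a rewritten unit and config, and the v1.30.1 startup that follows gets past config loading. Every "connection refused" against 10.0.0.26:6443 below simply means the kube-apiserver static pod is not running yet. A quick way to watch for the API server coming alive, with the address taken from the log:

    # refused until the apiserver container is up; returns "ok" afterwards
    curl -sk https://10.0.0.26:6443/healthz
    journalctl -u kubelet -f     # follow the bootstrap retries shown below
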
Oct 8 19:52:29.448879 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:52:29.493024 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:52:29.493024 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:52:29.493024 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:52:29.511060 kubelet[2308]: I1008 19:52:29.510940 2308 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:52:29.776677 kubelet[2308]: I1008 19:52:29.776540 2308 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 19:52:29.776677 kubelet[2308]: I1008 19:52:29.776577 2308 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:52:29.776825 kubelet[2308]: I1008 19:52:29.776807 2308 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 19:52:29.829492 kubelet[2308]: I1008 19:52:29.829435 2308 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:52:29.847136 kubelet[2308]: E1008 19:52:29.847110 2308 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:29.884828 kubelet[2308]: I1008 19:52:29.884770 2308 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:52:29.898471 kubelet[2308]: I1008 19:52:29.898360 2308 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:52:29.898682 kubelet[2308]: I1008 19:52:29.898432 2308 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:52:29.902969 kubelet[2308]: I1008 19:52:29.902877 2308 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:52:29.902969 kubelet[2308]: I1008 19:52:29.902908 2308 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:52:29.903199 kubelet[2308]: I1008 19:52:29.903130 2308 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:52:29.910418 kubelet[2308]: I1008 19:52:29.910353 2308 kubelet.go:400] "Attempting to sync node with API server" Oct 8 19:52:29.910418 kubelet[2308]: I1008 19:52:29.910397 2308 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:52:29.910619 kubelet[2308]: I1008 19:52:29.910453 2308 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:52:29.910619 kubelet[2308]: I1008 19:52:29.910484 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:52:29.911216 kubelet[2308]: W1008 19:52:29.911125 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:29.911216 kubelet[2308]: W1008 19:52:29.911158 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:29.911216 kubelet[2308]: E1008 19:52:29.911232 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:29.911216 kubelet[2308]: E1008 19:52:29.911239 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:29.932648 kubelet[2308]: I1008 19:52:29.932581 2308 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:52:29.947499 kubelet[2308]: I1008 19:52:29.947414 2308 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:52:29.947669 kubelet[2308]: W1008 19:52:29.947558 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:52:29.948692 kubelet[2308]: I1008 19:52:29.948543 2308 server.go:1264] "Started kubelet" Oct 8 19:52:29.949154 kubelet[2308]: I1008 19:52:29.949061 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:52:29.949590 kubelet[2308]: I1008 19:52:29.949569 2308 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:52:29.949683 kubelet[2308]: I1008 19:52:29.949629 2308 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:52:29.950285 kubelet[2308]: I1008 19:52:29.950235 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:52:29.950858 kubelet[2308]: I1008 19:52:29.950804 2308 server.go:455] "Adding debug handlers to kubelet server" Oct 8 19:52:29.954789 kubelet[2308]: E1008 19:52:29.954097 2308 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:52:29.954789 kubelet[2308]: I1008 19:52:29.954153 2308 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:52:29.954789 kubelet[2308]: I1008 19:52:29.954295 2308 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 19:52:29.954789 kubelet[2308]: I1008 19:52:29.954374 2308 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:52:29.954789 kubelet[2308]: E1008 19:52:29.954740 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Oct 8 19:52:29.955112 kubelet[2308]: W1008 19:52:29.954894 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:29.955112 kubelet[2308]: E1008 19:52:29.954964 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:29.955638 kubelet[2308]: E1008 19:52:29.955608 2308 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:52:29.955703 kubelet[2308]: I1008 19:52:29.955688 2308 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:52:29.955786 kubelet[2308]: I1008 19:52:29.955763 2308 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:52:29.956716 kubelet[2308]: I1008 19:52:29.956684 2308 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:52:29.980208 kubelet[2308]: E1008 19:52:29.980035 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc923d865fb1db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:52:29.948506587 +0000 UTC m=+0.495220088,LastTimestamp:2024-10-08 19:52:29.948506587 +0000 UTC m=+0.495220088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:52:29.980413 kubelet[2308]: I1008 19:52:29.980300 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:52:29.982812 kubelet[2308]: I1008 19:52:29.982211 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 19:52:29.982812 kubelet[2308]: I1008 19:52:29.982295 2308 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:52:29.982812 kubelet[2308]: I1008 19:52:29.982337 2308 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 19:52:29.982812 kubelet[2308]: E1008 19:52:29.982411 2308 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:52:29.986157 kubelet[2308]: W1008 19:52:29.986107 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:29.986157 kubelet[2308]: E1008 19:52:29.986153 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:29.986490 kubelet[2308]: I1008 19:52:29.986463 2308 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:52:29.986490 kubelet[2308]: I1008 19:52:29.986482 2308 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:52:29.986568 kubelet[2308]: I1008 19:52:29.986504 2308 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:52:30.055799 kubelet[2308]: I1008 19:52:30.055750 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:52:30.056292 kubelet[2308]: E1008 19:52:30.056240 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Oct 8 19:52:30.083505 kubelet[2308]: E1008 19:52:30.083383 2308 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:52:30.155487 kubelet[2308]: E1008 19:52:30.155408 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Oct 8 19:52:30.258113 kubelet[2308]: I1008 19:52:30.258071 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:52:30.258599 kubelet[2308]: E1008 19:52:30.258540 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Oct 8 19:52:30.283701 kubelet[2308]: E1008 19:52:30.283627 2308 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:52:30.556345 kubelet[2308]: E1008 19:52:30.556283 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" Oct 8 19:52:30.660501 kubelet[2308]: I1008 19:52:30.660466 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:52:30.660908 kubelet[2308]: E1008 19:52:30.660867 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Oct 8 19:52:30.684215 kubelet[2308]: E1008 19:52:30.684101 2308 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:52:30.847808 kubelet[2308]: W1008 19:52:30.847616 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:30.847808 kubelet[2308]: E1008 19:52:30.847686 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:31.074211 kubelet[2308]: W1008 19:52:31.074136 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:31.074211 kubelet[2308]: E1008 19:52:31.074184 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:31.190634 kubelet[2308]: W1008 19:52:31.190437 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:31.190634 kubelet[2308]: E1008 19:52:31.190529 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:31.357404 kubelet[2308]: E1008 19:52:31.357318 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s" Oct 8 19:52:31.462857 kubelet[2308]: I1008 19:52:31.462722 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:52:31.463311 kubelet[2308]: E1008 19:52:31.463257 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Oct 8 19:52:31.484396 kubelet[2308]: E1008 19:52:31.484333 2308 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:52:31.506345 kubelet[2308]: W1008 19:52:31.506261 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:31.506345 kubelet[2308]: E1008 19:52:31.506342 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:31.839552 kubelet[2308]: I1008 19:52:31.839404 2308 policy_none.go:49] "None policy: Start" Oct 8 19:52:31.840620 kubelet[2308]: I1008 19:52:31.840580 2308 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:52:31.840674 kubelet[2308]: I1008 19:52:31.840628 2308 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:52:31.931451 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 19:52:31.944098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:52:31.947244 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 19:52:31.956908 kubelet[2308]: I1008 19:52:31.956835 2308 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:52:31.957376 kubelet[2308]: I1008 19:52:31.957139 2308 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:52:31.957376 kubelet[2308]: I1008 19:52:31.957315 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:52:31.958284 kubelet[2308]: E1008 19:52:31.958248 2308 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:52:31.995784 kubelet[2308]: E1008 19:52:31.995720 2308 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:32.619148 kubelet[2308]: W1008 19:52:32.619071 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:32.619148 kubelet[2308]: E1008 19:52:32.619129 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:32.958869 kubelet[2308]: E1008 19:52:32.958685 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="3.2s" Oct 8 19:52:33.065711 kubelet[2308]: I1008 19:52:33.065663 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:52:33.066136 kubelet[2308]: E1008 19:52:33.066091 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Oct 8 19:52:33.085664 kubelet[2308]: I1008 19:52:33.085555 2308 topology_manager.go:215] "Topology Admit Handler" podUID="f88fe9a6a793d298e6f29b1c834ac17e" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:52:33.087133 kubelet[2308]: I1008 19:52:33.087081 2308 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:52:33.088241 kubelet[2308]: I1008 19:52:33.088173 2308 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:52:33.095466 systemd[1]: Created slice kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice - libcontainer container kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice. Oct 8 19:52:33.107137 systemd[1]: Created slice kubepods-burstable-podf88fe9a6a793d298e6f29b1c834ac17e.slice - libcontainer container kubepods-burstable-podf88fe9a6a793d298e6f29b1c834ac17e.slice. 
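
The lease controller's "Failed to ensure lease exists, will retry" intervals in the entries above double on each failed attempt (200ms, 400ms, 800ms, 1.6s, 3.2s, and 6.4s further down) while 10.0.0.26:6443 refuses connections. A minimal sketch of that doubling pattern, assuming a fixed number of attempts; the real client-side backoff helpers add jitter and reset once a request succeeds.

package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the Get/Create against
// /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost.
func ensureLease() error {
	return errors.New("connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("attempt %d failed, will retry in %v: %v\n", attempt, interval, err)
			time.Sleep(interval) // in the log: "will retry ... interval=..."
			interval *= 2        // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s -> 6.4s
		}
	}
}
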
Oct 8 19:52:33.111221 systemd[1]: Created slice kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice - libcontainer container kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice. Oct 8 19:52:33.174563 kubelet[2308]: I1008 19:52:33.174490 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:33.174563 kubelet[2308]: I1008 19:52:33.174536 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:33.174563 kubelet[2308]: I1008 19:52:33.174563 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:33.174563 kubelet[2308]: I1008 19:52:33.174583 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f88fe9a6a793d298e6f29b1c834ac17e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f88fe9a6a793d298e6f29b1c834ac17e\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:33.174870 kubelet[2308]: I1008 19:52:33.174600 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f88fe9a6a793d298e6f29b1c834ac17e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f88fe9a6a793d298e6f29b1c834ac17e\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:33.174870 kubelet[2308]: I1008 19:52:33.174684 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f88fe9a6a793d298e6f29b1c834ac17e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f88fe9a6a793d298e6f29b1c834ac17e\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:33.174870 kubelet[2308]: I1008 19:52:33.174732 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:33.174870 kubelet[2308]: I1008 19:52:33.174763 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:33.174870 kubelet[2308]: I1008 19:52:33.174811 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:52:33.318977 kubelet[2308]: W1008 19:52:33.318882 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:33.318977 kubelet[2308]: E1008 19:52:33.318976 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:33.405310 kubelet[2308]: E1008 19:52:33.405221 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:33.406253 containerd[1461]: time="2024-10-08T19:52:33.406194239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:33.411493 kubelet[2308]: E1008 19:52:33.411425 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:33.412059 containerd[1461]: time="2024-10-08T19:52:33.412003930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f88fe9a6a793d298e6f29b1c834ac17e,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:33.414436 kubelet[2308]: E1008 19:52:33.414383 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:33.415018 containerd[1461]: time="2024-10-08T19:52:33.414960227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:33.954819 kubelet[2308]: W1008 19:52:33.954768 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:33.954819 kubelet[2308]: E1008 19:52:33.954816 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:34.242751 kubelet[2308]: W1008 19:52:34.242563 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:34.242751 kubelet[2308]: E1008 19:52:34.242640 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Oct 8 19:52:34.695471 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3109612700.mount: Deactivated successfully. Oct 8 19:52:34.706175 containerd[1461]: time="2024-10-08T19:52:34.706092714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:52:34.707265 containerd[1461]: time="2024-10-08T19:52:34.707206357Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:52:34.708278 containerd[1461]: time="2024-10-08T19:52:34.708191649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 8 19:52:34.709479 containerd[1461]: time="2024-10-08T19:52:34.709372119Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:52:34.710447 containerd[1461]: time="2024-10-08T19:52:34.710395842Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:52:34.711787 containerd[1461]: time="2024-10-08T19:52:34.711728708Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:52:34.712735 containerd[1461]: time="2024-10-08T19:52:34.712662273Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:52:34.715373 containerd[1461]: time="2024-10-08T19:52:34.715333434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:52:34.717569 containerd[1461]: time="2024-10-08T19:52:34.717498313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.305377875s" Oct 8 19:52:34.719372 containerd[1461]: time="2024-10-08T19:52:34.719309918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.312994803s" Oct 8 19:52:34.721789 containerd[1461]: time="2024-10-08T19:52:34.721741269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.306691794s" Oct 8 19:52:35.021641 containerd[1461]: time="2024-10-08T19:52:35.020938270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:35.021641 containerd[1461]: time="2024-10-08T19:52:35.021022899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:35.021641 containerd[1461]: time="2024-10-08T19:52:35.021039620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:35.021988 containerd[1461]: time="2024-10-08T19:52:35.021150319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:35.022635 containerd[1461]: time="2024-10-08T19:52:35.022513721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:35.022635 containerd[1461]: time="2024-10-08T19:52:35.022607607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:35.024609 containerd[1461]: time="2024-10-08T19:52:35.022622946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:35.024609 containerd[1461]: time="2024-10-08T19:52:35.022732922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:35.024722 containerd[1461]: time="2024-10-08T19:52:35.024474266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:35.024722 containerd[1461]: time="2024-10-08T19:52:35.024545750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:35.024722 containerd[1461]: time="2024-10-08T19:52:35.024560949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:35.028327 containerd[1461]: time="2024-10-08T19:52:35.028136380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:35.059132 systemd[1]: Started cri-containerd-39a519622e22cd58048570daa3df8432ca589debc74fd2830cd398e9946ee8c5.scope - libcontainer container 39a519622e22cd58048570daa3df8432ca589debc74fd2830cd398e9946ee8c5. Oct 8 19:52:35.063997 systemd[1]: Started cri-containerd-36e638b6b6c9dde2bd0b1fd2a9a38aaf3d7376eaf41a47e3b710f163c2f5a2d4.scope - libcontainer container 36e638b6b6c9dde2bd0b1fd2a9a38aaf3d7376eaf41a47e3b710f163c2f5a2d4. Oct 8 19:52:35.065968 systemd[1]: Started cri-containerd-a6d2897b83fc01b16d1a4d50dc48edbafc9c05a83a64194d837bb384c8346367.scope - libcontainer container a6d2897b83fc01b16d1a4d50dc48edbafc9c05a83a64194d837bb384c8346367. 
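
Each static pod gets a sandbox (backed by the registry.k8s.io/pause:3.8 image pulled above) before any real container is created, and containerd asks systemd to run every container in a transient cri-containerd-<id>.scope unit, as in the three "Started" lines just above. Below is a toy model of that CRI call order with illustrative types, not the real k8s.io/cri-api definitions.

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

type runtime struct{ sandboxes map[string]string }

// newID fabricates a 64-hex identifier like the sandbox and container IDs
// in this log.
func newID() string {
	b := make([]byte, 32)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// RunPodSandbox creates the pause container that holds the pod's
// namespaces and returns the sandbox id later passed to CreateContainer.
func (r *runtime) RunPodSandbox(pod string) string {
	id := newID()
	r.sandboxes[id] = pod
	return id
}

// CreateContainer allocates a container inside an existing sandbox;
// containerd also starts a transient "cri-containerd-<id>.scope" unit
// for it, which is what systemd reports above.
func (r *runtime) CreateContainer(sandboxID, name string) string {
	return newID()
}

func main() {
	r := &runtime{sandboxes: map[string]string{}}
	sb := r.RunPodSandbox("kube-scheduler-localhost")
	c := r.CreateContainer(sb, "kube-scheduler")
	fmt.Printf("sandbox %s container %s\n", sb[:12], c[:12])
}
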
Oct 8 19:52:35.137865 containerd[1461]: time="2024-10-08T19:52:35.137730446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6d2897b83fc01b16d1a4d50dc48edbafc9c05a83a64194d837bb384c8346367\"" Oct 8 19:52:35.138433 containerd[1461]: time="2024-10-08T19:52:35.138272094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"39a519622e22cd58048570daa3df8432ca589debc74fd2830cd398e9946ee8c5\"" Oct 8 19:52:35.140038 kubelet[2308]: E1008 19:52:35.139344 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:35.140124 kubelet[2308]: E1008 19:52:35.140106 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:35.143877 containerd[1461]: time="2024-10-08T19:52:35.143816135Z" level=info msg="CreateContainer within sandbox \"a6d2897b83fc01b16d1a4d50dc48edbafc9c05a83a64194d837bb384c8346367\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:52:35.145739 containerd[1461]: time="2024-10-08T19:52:35.145687813Z" level=info msg="CreateContainer within sandbox \"39a519622e22cd58048570daa3df8432ca589debc74fd2830cd398e9946ee8c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:52:35.160590 containerd[1461]: time="2024-10-08T19:52:35.160537245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f88fe9a6a793d298e6f29b1c834ac17e,Namespace:kube-system,Attempt:0,} returns sandbox id \"36e638b6b6c9dde2bd0b1fd2a9a38aaf3d7376eaf41a47e3b710f163c2f5a2d4\"" Oct 8 19:52:35.161368 kubelet[2308]: E1008 19:52:35.161343 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:35.163520 containerd[1461]: time="2024-10-08T19:52:35.163487940Z" level=info msg="CreateContainer within sandbox \"36e638b6b6c9dde2bd0b1fd2a9a38aaf3d7376eaf41a47e3b710f163c2f5a2d4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:52:35.164039 containerd[1461]: time="2024-10-08T19:52:35.164012596Z" level=info msg="CreateContainer within sandbox \"a6d2897b83fc01b16d1a4d50dc48edbafc9c05a83a64194d837bb384c8346367\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cc882804c24563f8368685d02787508cbd4b07a6f13fd76160ac9dd695b70c9e\"" Oct 8 19:52:35.164453 containerd[1461]: time="2024-10-08T19:52:35.164432936Z" level=info msg="StartContainer for \"cc882804c24563f8368685d02787508cbd4b07a6f13fd76160ac9dd695b70c9e\"" Oct 8 19:52:35.177515 containerd[1461]: time="2024-10-08T19:52:35.177443682Z" level=info msg="CreateContainer within sandbox \"39a519622e22cd58048570daa3df8432ca589debc74fd2830cd398e9946ee8c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"97e8f58317e03c9018dab7ca74386a4d002ba477e465b6b151cc10932c7f0d01\"" Oct 8 19:52:35.178300 containerd[1461]: time="2024-10-08T19:52:35.178258884Z" level=info msg="StartContainer for \"97e8f58317e03c9018dab7ca74386a4d002ba477e465b6b151cc10932c7f0d01\"" Oct 8 19:52:35.188030 
containerd[1461]: time="2024-10-08T19:52:35.187944710Z" level=info msg="CreateContainer within sandbox \"36e638b6b6c9dde2bd0b1fd2a9a38aaf3d7376eaf41a47e3b710f163c2f5a2d4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f4c13e8c40f8eb0c03958d5a048f024032657c1a77c75beef087e1c4825d50d\"" Oct 8 19:52:35.189596 containerd[1461]: time="2024-10-08T19:52:35.189404544Z" level=info msg="StartContainer for \"8f4c13e8c40f8eb0c03958d5a048f024032657c1a77c75beef087e1c4825d50d\"" Oct 8 19:52:35.194227 systemd[1]: Started cri-containerd-cc882804c24563f8368685d02787508cbd4b07a6f13fd76160ac9dd695b70c9e.scope - libcontainer container cc882804c24563f8368685d02787508cbd4b07a6f13fd76160ac9dd695b70c9e. Oct 8 19:52:35.217215 systemd[1]: Started cri-containerd-97e8f58317e03c9018dab7ca74386a4d002ba477e465b6b151cc10932c7f0d01.scope - libcontainer container 97e8f58317e03c9018dab7ca74386a4d002ba477e465b6b151cc10932c7f0d01. Oct 8 19:52:35.239290 systemd[1]: Started cri-containerd-8f4c13e8c40f8eb0c03958d5a048f024032657c1a77c75beef087e1c4825d50d.scope - libcontainer container 8f4c13e8c40f8eb0c03958d5a048f024032657c1a77c75beef087e1c4825d50d. Oct 8 19:52:35.267625 containerd[1461]: time="2024-10-08T19:52:35.267406328Z" level=info msg="StartContainer for \"cc882804c24563f8368685d02787508cbd4b07a6f13fd76160ac9dd695b70c9e\" returns successfully" Oct 8 19:52:35.326889 containerd[1461]: time="2024-10-08T19:52:35.326845172Z" level=info msg="StartContainer for \"8f4c13e8c40f8eb0c03958d5a048f024032657c1a77c75beef087e1c4825d50d\" returns successfully" Oct 8 19:52:35.341295 containerd[1461]: time="2024-10-08T19:52:35.341230571Z" level=info msg="StartContainer for \"97e8f58317e03c9018dab7ca74386a4d002ba477e465b6b151cc10932c7f0d01\" returns successfully" Oct 8 19:52:36.010697 kubelet[2308]: E1008 19:52:36.010648 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:36.016656 kubelet[2308]: E1008 19:52:36.016616 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:36.018443 kubelet[2308]: E1008 19:52:36.017277 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:36.268558 kubelet[2308]: I1008 19:52:36.268417 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:52:37.018406 kubelet[2308]: E1008 19:52:37.018363 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:37.073939 kubelet[2308]: E1008 19:52:37.073617 2308 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc923d865fb1db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:52:29.948506587 +0000 UTC m=+0.495220088,LastTimestamp:2024-10-08 19:52:29.948506587 +0000 UTC m=+0.495220088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:52:37.076102 kubelet[2308]: I1008 19:52:37.076069 2308 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:52:37.188018 kubelet[2308]: E1008 19:52:37.187214 2308 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc923d86cbd83f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:52:29.955594303 +0000 UTC m=+0.502307794,LastTimestamp:2024-10-08 19:52:29.955594303 +0000 UTC m=+0.502307794,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:52:37.188960 kubelet[2308]: E1008 19:52:37.188942 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="6.4s" Oct 8 19:52:37.255483 kubelet[2308]: E1008 19:52:37.255346 2308 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc923d88926642 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:52:29.985384002 +0000 UTC m=+0.532097503,LastTimestamp:2024-10-08 19:52:29.985384002 +0000 UTC m=+0.532097503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:52:37.917265 kubelet[2308]: I1008 19:52:37.917197 2308 apiserver.go:52] "Watching apiserver" Oct 8 19:52:37.955144 kubelet[2308]: I1008 19:52:37.955063 2308 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 19:52:38.725866 kubelet[2308]: E1008 19:52:38.725816 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:39.020389 kubelet[2308]: E1008 19:52:39.020063 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:39.412645 systemd[1]: Reloading requested from client PID 2584 ('systemctl') (unit session-7.scope)... Oct 8 19:52:39.412661 systemd[1]: Reloading... Oct 8 19:52:39.501959 zram_generator::config[2626]: No configuration found. Oct 8 19:52:39.613819 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:52:39.713421 systemd[1]: Reloading finished in 300 ms. Oct 8 19:52:39.764303 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:39.776460 systemd[1]: kubelet.service: Deactivated successfully. 
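
Two event outcomes bracket this log: earlier, "Unable to write event (may retry after sleeping)" while the API server was still refusing connections; above, "Server rejected event (will not retry!)" because the buffered events were finally delivered to a cluster so young that the "default" namespace did not exist yet. A toy classifier for that retry decision, assuming the transient-vs-definitive split the two messages imply; the real recorder lives in client-go.

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
)

// shouldRetry: transient transport errors are retried after sleeping,
// while definitive server rejections (e.g. the missing namespace) are
// dropped without retry.
func shouldRetry(err error) bool {
	var opErr *net.OpError
	if errors.As(err, &opErr) || errors.Is(err, syscall.ECONNREFUSED) {
		return true // connection refused: API server not up yet
	}
	return false // rejected by the server: will not retry
}

func main() {
	refused := &net.OpError{Op: "dial", Err: syscall.ECONNREFUSED}
	rejected := errors.New(`namespaces "default" not found`)
	fmt.Println(shouldRetry(refused))  // true
	fmt.Println(shouldRetry(rejected)) // false
}
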
Oct 8 19:52:39.776728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:52:39.788265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:52:39.935456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:52:39.941295 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:52:39.991540 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:52:39.991540 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:52:39.991540 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:52:39.991540 kubelet[2668]: I1008 19:52:39.991504 2668 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:52:39.996596 kubelet[2668]: I1008 19:52:39.996546 2668 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 19:52:39.996596 kubelet[2668]: I1008 19:52:39.996578 2668 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:52:39.996879 kubelet[2668]: I1008 19:52:39.996853 2668 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 19:52:39.998194 kubelet[2668]: I1008 19:52:39.998169 2668 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:52:39.999371 kubelet[2668]: I1008 19:52:39.999334 2668 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:52:40.006705 kubelet[2668]: I1008 19:52:40.006668 2668 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:52:40.006934 kubelet[2668]: I1008 19:52:40.006880 2668 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:52:40.007088 kubelet[2668]: I1008 19:52:40.006910 2668 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:52:40.007171 kubelet[2668]: I1008 19:52:40.007099 2668 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:52:40.007171 kubelet[2668]: I1008 19:52:40.007108 2668 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:52:40.007171 kubelet[2668]: I1008 19:52:40.007153 2668 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:52:40.007277 kubelet[2668]: I1008 19:52:40.007265 2668 kubelet.go:400] "Attempting to sync node with API server" Oct 8 19:52:40.007277 kubelet[2668]: I1008 19:52:40.007278 2668 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:52:40.007346 kubelet[2668]: I1008 19:52:40.007299 2668 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:52:40.007346 kubelet[2668]: I1008 19:52:40.007317 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:52:40.008321 kubelet[2668]: I1008 19:52:40.008295 2668 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:52:40.008547 kubelet[2668]: I1008 19:52:40.008510 2668 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:52:40.009087 kubelet[2668]: I1008 19:52:40.008973 2668 server.go:1264] "Started kubelet" Oct 8 19:52:40.012416 kubelet[2668]: I1008 19:52:40.012395 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:52:40.012523 kubelet[2668]: I1008 19:52:40.012485 2668 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:52:40.012671 kubelet[2668]: I1008 19:52:40.012567 2668 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Oct 8 19:52:40.014312 kubelet[2668]: I1008 19:52:40.013294 2668 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:52:40.014610 kubelet[2668]: I1008 19:52:40.014578 2668 server.go:455] "Adding debug handlers to kubelet server" Oct 8 19:52:40.016556 kubelet[2668]: I1008 19:52:40.016061 2668 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:52:40.016824 kubelet[2668]: I1008 19:52:40.016777 2668 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 19:52:40.016991 kubelet[2668]: I1008 19:52:40.016966 2668 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:52:40.018110 kubelet[2668]: I1008 19:52:40.018090 2668 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:52:40.018338 kubelet[2668]: I1008 19:52:40.018315 2668 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:52:40.021593 kubelet[2668]: E1008 19:52:40.021567 2668 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:52:40.021875 kubelet[2668]: I1008 19:52:40.021857 2668 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:52:40.035391 kubelet[2668]: I1008 19:52:40.035350 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:52:40.038037 kubelet[2668]: I1008 19:52:40.038007 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 19:52:40.038154 kubelet[2668]: I1008 19:52:40.038055 2668 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:52:40.038154 kubelet[2668]: I1008 19:52:40.038074 2668 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 19:52:40.038154 kubelet[2668]: E1008 19:52:40.038116 2668 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:52:40.064599 kubelet[2668]: I1008 19:52:40.064562 2668 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:52:40.064599 kubelet[2668]: I1008 19:52:40.064583 2668 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:52:40.064599 kubelet[2668]: I1008 19:52:40.064607 2668 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:52:40.065133 kubelet[2668]: I1008 19:52:40.064865 2668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:52:40.065133 kubelet[2668]: I1008 19:52:40.064883 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:52:40.065133 kubelet[2668]: I1008 19:52:40.064908 2668 policy_none.go:49] "None policy: Start" Oct 8 19:52:40.065663 kubelet[2668]: I1008 19:52:40.065622 2668 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:52:40.065663 kubelet[2668]: I1008 19:52:40.065646 2668 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:52:40.065809 kubelet[2668]: I1008 19:52:40.065791 2668 state_mem.go:75] "Updated machine memory state" Oct 8 19:52:40.072067 kubelet[2668]: I1008 19:52:40.072041 2668 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:52:40.072311 kubelet[2668]: I1008 19:52:40.072248 2668 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:52:40.072488 kubelet[2668]: I1008 19:52:40.072379 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:52:40.122857 kubelet[2668]: I1008 19:52:40.122809 2668 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:52:40.138958 kubelet[2668]: I1008 19:52:40.138908 2668 topology_manager.go:215] "Topology Admit Handler" podUID="f88fe9a6a793d298e6f29b1c834ac17e" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:52:40.139050 kubelet[2668]: I1008 19:52:40.139025 2668 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:52:40.139101 kubelet[2668]: I1008 19:52:40.139088 2668 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:52:40.220730 kubelet[2668]: E1008 19:52:40.220676 2668 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:40.221858 kubelet[2668]: I1008 19:52:40.221815 2668 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 8 19:52:40.221935 kubelet[2668]: I1008 19:52:40.221903 2668 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:52:40.318371 kubelet[2668]: I1008 19:52:40.318290 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f88fe9a6a793d298e6f29b1c834ac17e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f88fe9a6a793d298e6f29b1c834ac17e\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:40.318371 kubelet[2668]: I1008 19:52:40.318351 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:40.318371 kubelet[2668]: I1008 19:52:40.318377 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:52:40.318589 kubelet[2668]: I1008 19:52:40.318395 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f88fe9a6a793d298e6f29b1c834ac17e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f88fe9a6a793d298e6f29b1c834ac17e\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:40.318589 kubelet[2668]: I1008 19:52:40.318414 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:40.318589 
kubelet[2668]: I1008 19:52:40.318431 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:40.318589 kubelet[2668]: I1008 19:52:40.318446 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:40.318589 kubelet[2668]: I1008 19:52:40.318531 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:40.318706 kubelet[2668]: I1008 19:52:40.318606 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f88fe9a6a793d298e6f29b1c834ac17e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f88fe9a6a793d298e6f29b1c834ac17e\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:40.522125 kubelet[2668]: E1008 19:52:40.521849 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:40.522125 kubelet[2668]: E1008 19:52:40.521849 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:40.522125 kubelet[2668]: E1008 19:52:40.522054 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:41.009359 kubelet[2668]: I1008 19:52:41.009245 2668 apiserver.go:52] "Watching apiserver" Oct 8 19:52:41.017984 kubelet[2668]: I1008 19:52:41.017880 2668 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 19:52:41.050702 kubelet[2668]: E1008 19:52:41.050656 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:41.233373 kubelet[2668]: E1008 19:52:41.233313 2668 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 19:52:41.233541 kubelet[2668]: E1008 19:52:41.233428 2668 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 8 19:52:41.233788 kubelet[2668]: E1008 19:52:41.233760 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:41.233849 kubelet[2668]: E1008 19:52:41.233818 2668 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:41.693131 kubelet[2668]: I1008 19:52:41.692357 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6923264599999999 podStartE2EDuration="1.69232646s" podCreationTimestamp="2024-10-08 19:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:41.413146776 +0000 UTC m=+1.466288446" watchObservedRunningTime="2024-10-08 19:52:41.69232646 +0000 UTC m=+1.745468130" Oct 8 19:52:41.728195 kubelet[2668]: I1008 19:52:41.727947 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.727890794 podStartE2EDuration="3.727890794s" podCreationTimestamp="2024-10-08 19:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:41.697786019 +0000 UTC m=+1.750927699" watchObservedRunningTime="2024-10-08 19:52:41.727890794 +0000 UTC m=+1.781032464" Oct 8 19:52:42.052636 kubelet[2668]: E1008 19:52:42.052555 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:42.053158 kubelet[2668]: E1008 19:52:42.052776 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:42.053366 kubelet[2668]: E1008 19:52:42.053346 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:45.862140 sudo[1642]: pam_unix(sudo:session): session closed for user root Oct 8 19:52:45.864490 sshd[1638]: pam_unix(sshd:session): session closed for user core Oct 8 19:52:45.868996 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:52498.service: Deactivated successfully. Oct 8 19:52:45.871124 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:52:45.871343 systemd[1]: session-7.scope: Consumed 5.698s CPU time, 193.9M memory peak, 0B memory swap peak. Oct 8 19:52:45.871825 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:52:45.872842 systemd-logind[1446]: Removed session 7. 
Oct 8 19:52:47.316766 kubelet[2668]: E1008 19:52:47.316639 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:47.480465 kubelet[2668]: I1008 19:52:47.480289 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.480267976 podStartE2EDuration="7.480267976s" podCreationTimestamp="2024-10-08 19:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:41.728149721 +0000 UTC m=+1.781291391" watchObservedRunningTime="2024-10-08 19:52:47.480267976 +0000 UTC m=+7.533409646" Oct 8 19:52:48.062972 kubelet[2668]: E1008 19:52:48.062897 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:48.104252 kubelet[2668]: E1008 19:52:48.104202 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:49.063583 kubelet[2668]: E1008 19:52:49.063538 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:51.376877 kubelet[2668]: E1008 19:52:51.376827 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:55.835406 kubelet[2668]: I1008 19:52:55.835354 2668 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:52:55.836207 containerd[1461]: time="2024-10-08T19:52:55.836165112Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 19:52:55.836482 kubelet[2668]: I1008 19:52:55.836390 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:52:56.244706 kubelet[2668]: I1008 19:52:56.244372 2668 topology_manager.go:215] "Topology Admit Handler" podUID="627afa47-fbf1-4462-a9b7-565c140903f6" podNamespace="kube-system" podName="kube-proxy-dm7d4" Oct 8 19:52:56.254661 systemd[1]: Created slice kubepods-besteffort-pod627afa47_fbf1_4462_a9b7_565c140903f6.slice - libcontainer container kubepods-besteffort-pod627afa47_fbf1_4462_a9b7_565c140903f6.slice. 
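
The runtime-config entries above hand the node's pod CIDR (192.168.0.0/24) to the CRI, replacing the empty originalPodCIDR so pod IPs can start being allocated; the "No cni config template is specified" line just notes that containerd keeps waiting for another component (here, the Calico operator being scheduled) to drop the CNI config. A small standard-library sketch of what that /24 provides; nothing here is kubelet code.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.0.0/24")
	first := prefix.Addr().Next() // .0 is the network address itself
	fmt.Printf("pod CIDR %v: first assignable IP %v, %d addresses total\n",
		prefix, first, 1<<(32-prefix.Bits()))
	// pod CIDR 192.168.0.0/24: first assignable IP 192.168.0.1, 256 addresses total
}
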
Oct 8 19:52:56.422094 kubelet[2668]: I1008 19:52:56.422017 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/627afa47-fbf1-4462-a9b7-565c140903f6-kube-proxy\") pod \"kube-proxy-dm7d4\" (UID: \"627afa47-fbf1-4462-a9b7-565c140903f6\") " pod="kube-system/kube-proxy-dm7d4" Oct 8 19:52:56.422094 kubelet[2668]: I1008 19:52:56.422082 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/627afa47-fbf1-4462-a9b7-565c140903f6-xtables-lock\") pod \"kube-proxy-dm7d4\" (UID: \"627afa47-fbf1-4462-a9b7-565c140903f6\") " pod="kube-system/kube-proxy-dm7d4" Oct 8 19:52:56.422094 kubelet[2668]: I1008 19:52:56.422109 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/627afa47-fbf1-4462-a9b7-565c140903f6-lib-modules\") pod \"kube-proxy-dm7d4\" (UID: \"627afa47-fbf1-4462-a9b7-565c140903f6\") " pod="kube-system/kube-proxy-dm7d4" Oct 8 19:52:56.422391 kubelet[2668]: I1008 19:52:56.422133 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttjw8\" (UniqueName: \"kubernetes.io/projected/627afa47-fbf1-4462-a9b7-565c140903f6-kube-api-access-ttjw8\") pod \"kube-proxy-dm7d4\" (UID: \"627afa47-fbf1-4462-a9b7-565c140903f6\") " pod="kube-system/kube-proxy-dm7d4" Oct 8 19:52:56.504331 kubelet[2668]: I1008 19:52:56.504181 2668 topology_manager.go:215] "Topology Admit Handler" podUID="99da0940-4e75-48d7-9c3e-be4d65de46fc" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-pczhr" Oct 8 19:52:56.511111 systemd[1]: Created slice kubepods-besteffort-pod99da0940_4e75_48d7_9c3e_be4d65de46fc.slice - libcontainer container kubepods-besteffort-pod99da0940_4e75_48d7_9c3e_be4d65de46fc.slice. Oct 8 19:52:56.567805 kubelet[2668]: E1008 19:52:56.567763 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:56.569114 containerd[1461]: time="2024-10-08T19:52:56.569056427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dm7d4,Uid:627afa47-fbf1-4462-a9b7-565c140903f6,Namespace:kube-system,Attempt:0,}" Oct 8 19:52:56.598393 containerd[1461]: time="2024-10-08T19:52:56.598103429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:56.598393 containerd[1461]: time="2024-10-08T19:52:56.598177182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:56.598393 containerd[1461]: time="2024-10-08T19:52:56.598187912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:56.598590 containerd[1461]: time="2024-10-08T19:52:56.598411093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:56.623156 kubelet[2668]: I1008 19:52:56.623067 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hj9n\" (UniqueName: \"kubernetes.io/projected/99da0940-4e75-48d7-9c3e-be4d65de46fc-kube-api-access-9hj9n\") pod \"tigera-operator-77f994b5bb-pczhr\" (UID: \"99da0940-4e75-48d7-9c3e-be4d65de46fc\") " pod="tigera-operator/tigera-operator-77f994b5bb-pczhr" Oct 8 19:52:56.623156 kubelet[2668]: I1008 19:52:56.623152 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99da0940-4e75-48d7-9c3e-be4d65de46fc-var-lib-calico\") pod \"tigera-operator-77f994b5bb-pczhr\" (UID: \"99da0940-4e75-48d7-9c3e-be4d65de46fc\") " pod="tigera-operator/tigera-operator-77f994b5bb-pczhr" Oct 8 19:52:56.629701 systemd[1]: Started cri-containerd-d571e0c0f59b3a5c58e86331ce1c040feaca8ca6419f71aca7726995de33d4ad.scope - libcontainer container d571e0c0f59b3a5c58e86331ce1c040feaca8ca6419f71aca7726995de33d4ad. Oct 8 19:52:56.656703 containerd[1461]: time="2024-10-08T19:52:56.656656931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dm7d4,Uid:627afa47-fbf1-4462-a9b7-565c140903f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d571e0c0f59b3a5c58e86331ce1c040feaca8ca6419f71aca7726995de33d4ad\"" Oct 8 19:52:56.657930 kubelet[2668]: E1008 19:52:56.657854 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:56.660440 containerd[1461]: time="2024-10-08T19:52:56.660388141Z" level=info msg="CreateContainer within sandbox \"d571e0c0f59b3a5c58e86331ce1c040feaca8ca6419f71aca7726995de33d4ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:52:56.680244 containerd[1461]: time="2024-10-08T19:52:56.680180415Z" level=info msg="CreateContainer within sandbox \"d571e0c0f59b3a5c58e86331ce1c040feaca8ca6419f71aca7726995de33d4ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"56b8083048bb4829495012e9f37f64ba5060bd6b7def86046c4b6a9eeebc99c4\"" Oct 8 19:52:56.680989 containerd[1461]: time="2024-10-08T19:52:56.680947155Z" level=info msg="StartContainer for \"56b8083048bb4829495012e9f37f64ba5060bd6b7def86046c4b6a9eeebc99c4\"" Oct 8 19:52:56.710092 systemd[1]: Started cri-containerd-56b8083048bb4829495012e9f37f64ba5060bd6b7def86046c4b6a9eeebc99c4.scope - libcontainer container 56b8083048bb4829495012e9f37f64ba5060bd6b7def86046c4b6a9eeebc99c4. Oct 8 19:52:56.743397 containerd[1461]: time="2024-10-08T19:52:56.743335777Z" level=info msg="StartContainer for \"56b8083048bb4829495012e9f37f64ba5060bd6b7def86046c4b6a9eeebc99c4\" returns successfully" Oct 8 19:52:56.815482 containerd[1461]: time="2024-10-08T19:52:56.815410151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-pczhr,Uid:99da0940-4e75-48d7-9c3e-be4d65de46fc,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:52:57.016244 containerd[1461]: time="2024-10-08T19:52:57.016041523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:52:57.016244 containerd[1461]: time="2024-10-08T19:52:57.016128059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:52:57.016244 containerd[1461]: time="2024-10-08T19:52:57.016142167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:57.016841 containerd[1461]: time="2024-10-08T19:52:57.016772402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:52:57.044206 systemd[1]: Started cri-containerd-4a3829dd35c8e2c5ff3b680cd818d243c40f58dd34298f5374c19057c1b225f6.scope - libcontainer container 4a3829dd35c8e2c5ff3b680cd818d243c40f58dd34298f5374c19057c1b225f6. Oct 8 19:52:57.084049 kubelet[2668]: E1008 19:52:57.083856 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:52:57.094152 containerd[1461]: time="2024-10-08T19:52:57.094097189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-pczhr,Uid:99da0940-4e75-48d7-9c3e-be4d65de46fc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4a3829dd35c8e2c5ff3b680cd818d243c40f58dd34298f5374c19057c1b225f6\"" Oct 8 19:52:57.105837 containerd[1461]: time="2024-10-08T19:52:57.105785628Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 19:52:58.766568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1580639552.mount: Deactivated successfully. Oct 8 19:52:59.429905 containerd[1461]: time="2024-10-08T19:52:59.429832964Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:59.431455 containerd[1461]: time="2024-10-08T19:52:59.431406755Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136509" Oct 8 19:52:59.433030 containerd[1461]: time="2024-10-08T19:52:59.432999843Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:59.436023 containerd[1461]: time="2024-10-08T19:52:59.435734619Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:52:59.437010 containerd[1461]: time="2024-10-08T19:52:59.436785242Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.330942444s" Oct 8 19:52:59.437010 containerd[1461]: time="2024-10-08T19:52:59.436823356Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 19:52:59.440387 containerd[1461]: time="2024-10-08T19:52:59.440353273Z" level=info msg="CreateContainer within sandbox \"4a3829dd35c8e2c5ff3b680cd818d243c40f58dd34298f5374c19057c1b225f6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 19:52:59.456037 containerd[1461]: time="2024-10-08T19:52:59.455973552Z" level=info msg="CreateContainer within sandbox 
\"4a3829dd35c8e2c5ff3b680cd818d243c40f58dd34298f5374c19057c1b225f6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"48872947f92e20adb8893494bc4cb78f52fc7c37d5d0fde0cb6bcbe772ecf50c\"" Oct 8 19:52:59.456993 containerd[1461]: time="2024-10-08T19:52:59.456942909Z" level=info msg="StartContainer for \"48872947f92e20adb8893494bc4cb78f52fc7c37d5d0fde0cb6bcbe772ecf50c\"" Oct 8 19:52:59.490131 systemd[1]: Started cri-containerd-48872947f92e20adb8893494bc4cb78f52fc7c37d5d0fde0cb6bcbe772ecf50c.scope - libcontainer container 48872947f92e20adb8893494bc4cb78f52fc7c37d5d0fde0cb6bcbe772ecf50c. Oct 8 19:52:59.584844 containerd[1461]: time="2024-10-08T19:52:59.584758128Z" level=info msg="StartContainer for \"48872947f92e20adb8893494bc4cb78f52fc7c37d5d0fde0cb6bcbe772ecf50c\" returns successfully" Oct 8 19:53:00.181459 kubelet[2668]: I1008 19:53:00.181200 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dm7d4" podStartSLOduration=5.181177082 podStartE2EDuration="5.181177082s" podCreationTimestamp="2024-10-08 19:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:52:57.1000301 +0000 UTC m=+17.153171770" watchObservedRunningTime="2024-10-08 19:53:00.181177082 +0000 UTC m=+20.234318752" Oct 8 19:53:00.181459 kubelet[2668]: I1008 19:53:00.181345 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-pczhr" podStartSLOduration=1.843320019 podStartE2EDuration="4.181338082s" podCreationTimestamp="2024-10-08 19:52:56 +0000 UTC" firstStartedPulling="2024-10-08 19:52:57.100108942 +0000 UTC m=+17.153250602" lastFinishedPulling="2024-10-08 19:52:59.438126995 +0000 UTC m=+19.491268665" observedRunningTime="2024-10-08 19:53:00.181055728 +0000 UTC m=+20.234197398" watchObservedRunningTime="2024-10-08 19:53:00.181338082 +0000 UTC m=+20.234479762" Oct 8 19:53:05.286525 kubelet[2668]: I1008 19:53:05.286461 2668 topology_manager.go:215] "Topology Admit Handler" podUID="58495cd2-0728-44c0-985a-dad79ee99a11" podNamespace="calico-system" podName="calico-typha-fc8844746-r75hj" Oct 8 19:53:05.300307 systemd[1]: Created slice kubepods-besteffort-pod58495cd2_0728_44c0_985a_dad79ee99a11.slice - libcontainer container kubepods-besteffort-pod58495cd2_0728_44c0_985a_dad79ee99a11.slice. Oct 8 19:53:05.454795 kubelet[2668]: I1008 19:53:05.454733 2668 topology_manager.go:215] "Topology Admit Handler" podUID="af7da029-0a07-4b01-9d51-e280fdf089e3" podNamespace="calico-system" podName="calico-node-lgxwr" Oct 8 19:53:05.461837 systemd[1]: Created slice kubepods-besteffort-podaf7da029_0a07_4b01_9d51_e280fdf089e3.slice - libcontainer container kubepods-besteffort-podaf7da029_0a07_4b01_9d51_e280fdf089e3.slice. 
Oct 8 19:53:05.477889 kubelet[2668]: I1008 19:53:05.477733 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58495cd2-0728-44c0-985a-dad79ee99a11-tigera-ca-bundle\") pod \"calico-typha-fc8844746-r75hj\" (UID: \"58495cd2-0728-44c0-985a-dad79ee99a11\") " pod="calico-system/calico-typha-fc8844746-r75hj" Oct 8 19:53:05.477889 kubelet[2668]: I1008 19:53:05.477832 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkdpw\" (UniqueName: \"kubernetes.io/projected/58495cd2-0728-44c0-985a-dad79ee99a11-kube-api-access-xkdpw\") pod \"calico-typha-fc8844746-r75hj\" (UID: \"58495cd2-0728-44c0-985a-dad79ee99a11\") " pod="calico-system/calico-typha-fc8844746-r75hj" Oct 8 19:53:05.477889 kubelet[2668]: I1008 19:53:05.477871 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/58495cd2-0728-44c0-985a-dad79ee99a11-typha-certs\") pod \"calico-typha-fc8844746-r75hj\" (UID: \"58495cd2-0728-44c0-985a-dad79ee99a11\") " pod="calico-system/calico-typha-fc8844746-r75hj" Oct 8 19:53:05.527405 kubelet[2668]: I1008 19:53:05.527321 2668 topology_manager.go:215] "Topology Admit Handler" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" podNamespace="calico-system" podName="csi-node-driver-46ktg" Oct 8 19:53:05.527804 kubelet[2668]: E1008 19:53:05.527772 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:05.578934 kubelet[2668]: I1008 19:53:05.578769 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/af7da029-0a07-4b01-9d51-e280fdf089e3-cni-net-dir\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.578934 kubelet[2668]: I1008 19:53:05.578820 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2ht9\" (UniqueName: \"kubernetes.io/projected/af7da029-0a07-4b01-9d51-e280fdf089e3-kube-api-access-x2ht9\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.578934 kubelet[2668]: I1008 19:53:05.578850 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af7da029-0a07-4b01-9d51-e280fdf089e3-lib-modules\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.578934 kubelet[2668]: I1008 19:53:05.578884 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af7da029-0a07-4b01-9d51-e280fdf089e3-var-lib-calico\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.579155 kubelet[2668]: I1008 19:53:05.578944 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/af7da029-0a07-4b01-9d51-e280fdf089e3-var-run-calico\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.579155 kubelet[2668]: I1008 19:53:05.578980 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/af7da029-0a07-4b01-9d51-e280fdf089e3-cni-bin-dir\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.579155 kubelet[2668]: I1008 19:53:05.578999 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/af7da029-0a07-4b01-9d51-e280fdf089e3-flexvol-driver-host\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.579155 kubelet[2668]: I1008 19:53:05.579021 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/af7da029-0a07-4b01-9d51-e280fdf089e3-policysync\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.579155 kubelet[2668]: I1008 19:53:05.579050 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/af7da029-0a07-4b01-9d51-e280fdf089e3-node-certs\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.579284 kubelet[2668]: I1008 19:53:05.579083 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af7da029-0a07-4b01-9d51-e280fdf089e3-xtables-lock\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.579284 kubelet[2668]: I1008 19:53:05.579105 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af7da029-0a07-4b01-9d51-e280fdf089e3-tigera-ca-bundle\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.579284 kubelet[2668]: I1008 19:53:05.579126 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/af7da029-0a07-4b01-9d51-e280fdf089e3-cni-log-dir\") pod \"calico-node-lgxwr\" (UID: \"af7da029-0a07-4b01-9d51-e280fdf089e3\") " pod="calico-system/calico-node-lgxwr" Oct 8 19:53:05.679653 kubelet[2668]: I1008 19:53:05.679603 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2be15c5-243c-4ea8-a2a3-616319911d83-kubelet-dir\") pod \"csi-node-driver-46ktg\" (UID: \"c2be15c5-243c-4ea8-a2a3-616319911d83\") " pod="calico-system/csi-node-driver-46ktg" Oct 8 19:53:05.679789 kubelet[2668]: I1008 19:53:05.679705 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/c2be15c5-243c-4ea8-a2a3-616319911d83-registration-dir\") pod \"csi-node-driver-46ktg\" (UID: \"c2be15c5-243c-4ea8-a2a3-616319911d83\") " pod="calico-system/csi-node-driver-46ktg" Oct 8 19:53:05.679857 kubelet[2668]: I1008 19:53:05.679837 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhvw\" (UniqueName: \"kubernetes.io/projected/c2be15c5-243c-4ea8-a2a3-616319911d83-kube-api-access-nrhvw\") pod \"csi-node-driver-46ktg\" (UID: \"c2be15c5-243c-4ea8-a2a3-616319911d83\") " pod="calico-system/csi-node-driver-46ktg" Oct 8 19:53:05.680049 kubelet[2668]: I1008 19:53:05.679891 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c2be15c5-243c-4ea8-a2a3-616319911d83-varrun\") pod \"csi-node-driver-46ktg\" (UID: \"c2be15c5-243c-4ea8-a2a3-616319911d83\") " pod="calico-system/csi-node-driver-46ktg" Oct 8 19:53:05.680049 kubelet[2668]: I1008 19:53:05.679970 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2be15c5-243c-4ea8-a2a3-616319911d83-socket-dir\") pod \"csi-node-driver-46ktg\" (UID: \"c2be15c5-243c-4ea8-a2a3-616319911d83\") " pod="calico-system/csi-node-driver-46ktg" Oct 8 19:53:05.680681 kubelet[2668]: E1008 19:53:05.680661 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:05.680839 kubelet[2668]: W1008 19:53:05.680744 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:05.680839 kubelet[2668]: E1008 19:53:05.680777 2668 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:05.681018 kubelet[2668]: E1008 19:53:05.680983 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:05.681018 kubelet[2668]: W1008 19:53:05.680996 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:05.681018 kubelet[2668]: E1008 19:53:05.681009 2668 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:05.681218 kubelet[2668]: E1008 19:53:05.681196 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:05.681218 kubelet[2668]: W1008 19:53:05.681206 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:05.681218 kubelet[2668]: E1008 19:53:05.681215 2668 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:53:05.885876 kubelet[2668]: E1008 19:53:05.885758 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:05.885876 kubelet[2668]: W1008 19:53:05.885790 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:05.885876 kubelet[2668]: E1008 19:53:05.885816 2668 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 8 19:53:05.886180 kubelet[2668]: E1008 19:53:05.886146 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:05.886180 kubelet[2668]: W1008 19:53:05.886165 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:05.886180 kubelet[2668]: E1008 19:53:05.886185 2668 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:05.886567 kubelet[2668]: E1008 19:53:05.886526 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:05.886567 kubelet[2668]: W1008 19:53:05.886560 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:05.886645 kubelet[2668]: E1008 19:53:05.886591 2668 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:05.943824 kubelet[2668]: E1008 19:53:05.943781 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:05.943824 kubelet[2668]: W1008 19:53:05.943811 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:05.944232 kubelet[2668]: E1008 19:53:05.943857 2668 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:05.945754 kubelet[2668]: E1008 19:53:05.945721 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:05.945754 kubelet[2668]: W1008 19:53:05.945751 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:05.945848 kubelet[2668]: E1008 19:53:05.945763 2668 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:53:05.947866 kubelet[2668]: E1008 19:53:05.947824 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:53:05.947866 kubelet[2668]: W1008 19:53:05.947846 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:53:05.947866 kubelet[2668]: E1008 19:53:05.947871 2668 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
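
The twenty-five probe failures above share one root cause: kubelet's FlexVolume prober execs the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the subcommand init and unmarshals its stdout as JSON. The binary was never installed, so the exec fails, stdout stays empty, and unmarshalling "" yields "unexpected end of JSON input". As a minimal sketch of the handshake driver-call.go expects (illustrative Go, not the real uds driver that the node-agent package would normally ship), a driver satisfying the init probe only has to print a status object:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON object kubelet's FlexVolume
    // driver-call machinery expects on stdout after every invocation.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Advertise no attach support, as a socket-only driver would.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }

Installed at the path above, this would answer the init call with {"status":"Success","capabilities":{"attach":false}} instead of the empty output logged here.
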
Oct 8 19:53:06.065445 kubelet[2668]: E1008 19:53:06.065342 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:06.066150 containerd[1461]: time="2024-10-08T19:53:06.066086852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lgxwr,Uid:af7da029-0a07-4b01-9d51-e280fdf089e3,Namespace:calico-system,Attempt:0,}" Oct 8 19:53:06.204543 kubelet[2668]: E1008 19:53:06.204405 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:06.205078 containerd[1461]: time="2024-10-08T19:53:06.205016991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc8844746-r75hj,Uid:58495cd2-0728-44c0-985a-dad79ee99a11,Namespace:calico-system,Attempt:0,}" Oct 8 19:53:06.295440 containerd[1461]: time="2024-10-08T19:53:06.295103358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:06.295440 containerd[1461]: time="2024-10-08T19:53:06.295198791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:06.300803 containerd[1461]: time="2024-10-08T19:53:06.295709279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:06.300803 containerd[1461]: time="2024-10-08T19:53:06.295969648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:06.321281 systemd[1]: Started cri-containerd-3c215f5d3f32ed130db2425a195d92db7bf16cb27d1abb170597b47a3b30f3b0.scope - libcontainer container 3c215f5d3f32ed130db2425a195d92db7bf16cb27d1abb170597b47a3b30f3b0. Oct 8 19:53:06.338751 containerd[1461]: time="2024-10-08T19:53:06.338326561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:06.338751 containerd[1461]: time="2024-10-08T19:53:06.338398930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:06.338751 containerd[1461]: time="2024-10-08T19:53:06.338420391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:06.338751 containerd[1461]: time="2024-10-08T19:53:06.338530301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:06.370559 containerd[1461]: time="2024-10-08T19:53:06.370490798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lgxwr,Uid:af7da029-0a07-4b01-9d51-e280fdf089e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c215f5d3f32ed130db2425a195d92db7bf16cb27d1abb170597b47a3b30f3b0\"" Oct 8 19:53:06.371826 kubelet[2668]: E1008 19:53:06.371430 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:06.371539 systemd[1]: Started cri-containerd-a5eca0398f2eecb78178ba4dd551a1bceaf6bcdbcba84322357bdf96d894194f.scope - libcontainer container a5eca0398f2eecb78178ba4dd551a1bceaf6bcdbcba84322357bdf96d894194f. Oct 8 19:53:06.373724 containerd[1461]: time="2024-10-08T19:53:06.373091094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:53:06.415999 containerd[1461]: time="2024-10-08T19:53:06.415951121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc8844746-r75hj,Uid:58495cd2-0728-44c0-985a-dad79ee99a11,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5eca0398f2eecb78178ba4dd551a1bceaf6bcdbcba84322357bdf96d894194f\"" Oct 8 19:53:06.417114 kubelet[2668]: E1008 19:53:06.417082 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:07.039180 kubelet[2668]: E1008 19:53:07.039120 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:09.039179 kubelet[2668]: E1008 19:53:09.039088 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:09.059536 containerd[1461]: time="2024-10-08T19:53:09.059367104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:09.062016 containerd[1461]: time="2024-10-08T19:53:09.061227836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 19:53:09.064947 containerd[1461]: time="2024-10-08T19:53:09.064316359Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:09.069217 containerd[1461]: time="2024-10-08T19:53:09.069164251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:09.071761 containerd[1461]: time="2024-10-08T19:53:09.071710736Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id 
\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 2.698581019s" Oct 8 19:53:09.071889 containerd[1461]: time="2024-10-08T19:53:09.071855033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 19:53:09.079948 containerd[1461]: time="2024-10-08T19:53:09.077291090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:53:09.079948 containerd[1461]: time="2024-10-08T19:53:09.078701281Z" level=info msg="CreateContainer within sandbox \"3c215f5d3f32ed130db2425a195d92db7bf16cb27d1abb170597b47a3b30f3b0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:53:09.118628 containerd[1461]: time="2024-10-08T19:53:09.118571206Z" level=info msg="CreateContainer within sandbox \"3c215f5d3f32ed130db2425a195d92db7bf16cb27d1abb170597b47a3b30f3b0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bdfd9c6dc9d09bcf031cd9f4ff343e8410601e47a66c7f33cd3dc66766f902ce\"" Oct 8 19:53:09.119430 containerd[1461]: time="2024-10-08T19:53:09.119365917Z" level=info msg="StartContainer for \"bdfd9c6dc9d09bcf031cd9f4ff343e8410601e47a66c7f33cd3dc66766f902ce\"" Oct 8 19:53:09.157148 systemd[1]: Started cri-containerd-bdfd9c6dc9d09bcf031cd9f4ff343e8410601e47a66c7f33cd3dc66766f902ce.scope - libcontainer container bdfd9c6dc9d09bcf031cd9f4ff343e8410601e47a66c7f33cd3dc66766f902ce. Oct 8 19:53:09.199440 containerd[1461]: time="2024-10-08T19:53:09.199389622Z" level=info msg="StartContainer for \"bdfd9c6dc9d09bcf031cd9f4ff343e8410601e47a66c7f33cd3dc66766f902ce\" returns successfully" Oct 8 19:53:09.216148 systemd[1]: cri-containerd-bdfd9c6dc9d09bcf031cd9f4ff343e8410601e47a66c7f33cd3dc66766f902ce.scope: Deactivated successfully. Oct 8 19:53:09.240824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdfd9c6dc9d09bcf031cd9f4ff343e8410601e47a66c7f33cd3dc66766f902ce-rootfs.mount: Deactivated successfully. Oct 8 19:53:09.365808 containerd[1461]: time="2024-10-08T19:53:09.363073513Z" level=info msg="shim disconnected" id=bdfd9c6dc9d09bcf031cd9f4ff343e8410601e47a66c7f33cd3dc66766f902ce namespace=k8s.io Oct 8 19:53:09.365808 containerd[1461]: time="2024-10-08T19:53:09.365700373Z" level=warning msg="cleaning up after shim disconnected" id=bdfd9c6dc9d09bcf031cd9f4ff343e8410601e47a66c7f33cd3dc66766f902ce namespace=k8s.io Oct 8 19:53:09.365808 containerd[1461]: time="2024-10-08T19:53:09.365720151Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:53:10.116322 kubelet[2668]: E1008 19:53:10.116279 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:10.946455 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:57968.service - OpenSSH per-connection server daemon (10.0.0.1:57968). Oct 8 19:53:10.989040 sshd[3279]: Accepted publickey for core from 10.0.0.1 port 57968 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:10.991945 sshd[3279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:10.997709 systemd-logind[1446]: New session 8 of user core. 
Oct 8 19:53:11.004255 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:53:11.038825 kubelet[2668]: E1008 19:53:11.038740 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:11.203855 sshd[3279]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:11.209230 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:57968.service: Deactivated successfully. Oct 8 19:53:11.211602 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:53:11.212713 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:53:11.214418 systemd-logind[1446]: Removed session 8. Oct 8 19:53:11.955607 containerd[1461]: time="2024-10-08T19:53:11.955516520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:11.957645 containerd[1461]: time="2024-10-08T19:53:11.957562915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 8 19:53:11.958834 containerd[1461]: time="2024-10-08T19:53:11.958778751Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:11.961283 containerd[1461]: time="2024-10-08T19:53:11.961224668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:11.962008 containerd[1461]: time="2024-10-08T19:53:11.961960727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.884624239s" Oct 8 19:53:11.962008 containerd[1461]: time="2024-10-08T19:53:11.962005222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 8 19:53:11.963956 containerd[1461]: time="2024-10-08T19:53:11.963376264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:53:11.976526 containerd[1461]: time="2024-10-08T19:53:11.976479127Z" level=info msg="CreateContainer within sandbox \"a5eca0398f2eecb78178ba4dd551a1bceaf6bcdbcba84322357bdf96d894194f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:53:11.993939 containerd[1461]: time="2024-10-08T19:53:11.993873319Z" level=info msg="CreateContainer within sandbox \"a5eca0398f2eecb78178ba4dd551a1bceaf6bcdbcba84322357bdf96d894194f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c7d55087a7922e275ebfa49ad965d5901a0fb49a8846652af2f7623f1585cb7c\"" Oct 8 19:53:11.994806 containerd[1461]: time="2024-10-08T19:53:11.994716632Z" level=info msg="StartContainer for \"c7d55087a7922e275ebfa49ad965d5901a0fb49a8846652af2f7623f1585cb7c\"" Oct 8 19:53:12.025093 systemd[1]: Started 
cri-containerd-c7d55087a7922e275ebfa49ad965d5901a0fb49a8846652af2f7623f1585cb7c.scope - libcontainer container c7d55087a7922e275ebfa49ad965d5901a0fb49a8846652af2f7623f1585cb7c. Oct 8 19:53:12.098930 containerd[1461]: time="2024-10-08T19:53:12.098851385Z" level=info msg="StartContainer for \"c7d55087a7922e275ebfa49ad965d5901a0fb49a8846652af2f7623f1585cb7c\" returns successfully" Oct 8 19:53:12.133028 kubelet[2668]: E1008 19:53:12.132969 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:12.164729 kubelet[2668]: I1008 19:53:12.164639 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fc8844746-r75hj" podStartSLOduration=1.619563808 podStartE2EDuration="7.164618407s" podCreationTimestamp="2024-10-08 19:53:05 +0000 UTC" firstStartedPulling="2024-10-08 19:53:06.417827439 +0000 UTC m=+26.470969109" lastFinishedPulling="2024-10-08 19:53:11.962882048 +0000 UTC m=+32.016023708" observedRunningTime="2024-10-08 19:53:12.163960119 +0000 UTC m=+32.217101799" watchObservedRunningTime="2024-10-08 19:53:12.164618407 +0000 UTC m=+32.217760077" Oct 8 19:53:13.039351 kubelet[2668]: E1008 19:53:13.039248 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:13.134739 kubelet[2668]: I1008 19:53:13.134681 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:53:13.135441 kubelet[2668]: E1008 19:53:13.135416 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:15.038739 kubelet[2668]: E1008 19:53:15.038656 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:16.215585 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:57982.service - OpenSSH per-connection server daemon (10.0.0.1:57982). Oct 8 19:53:16.275182 sshd[3338]: Accepted publickey for core from 10.0.0.1 port 57982 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:16.276953 sshd[3338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:16.282275 systemd-logind[1446]: New session 9 of user core. Oct 8 19:53:16.297121 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:53:16.413929 sshd[3338]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:16.418152 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:57982.service: Deactivated successfully. Oct 8 19:53:16.420228 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:53:16.420889 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:53:16.421854 systemd-logind[1446]: Removed session 9. 
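
The pod_startup_latency_tracker line above is internally consistent once you know the SLO figure excludes image-pull time. On the monotonic clock (the m=+ offsets), pulling ran from m=+26.470969109 to m=+32.016023708, so:

    pull time   = 32.016023708 - 26.470969109 = 5.545054599 s
    SLO figure  =  7.164618407 -  5.545054599 = 1.619563808 s

which matches the reported podStartSLOduration=1.619563808 against podStartE2EDuration=7.164618407s exactly.
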
Oct 8 19:53:18.875965 kubelet[2668]: E1008 19:53:18.875829 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:20.750372 containerd[1461]: time="2024-10-08T19:53:20.750289682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:20.751395 containerd[1461]: time="2024-10-08T19:53:20.751326819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 19:53:20.752939 containerd[1461]: time="2024-10-08T19:53:20.752869408Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:20.755592 containerd[1461]: time="2024-10-08T19:53:20.755538496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:20.756633 containerd[1461]: time="2024-10-08T19:53:20.756591202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 8.793181544s" Oct 8 19:53:20.756687 containerd[1461]: time="2024-10-08T19:53:20.756631890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 19:53:20.760034 containerd[1461]: time="2024-10-08T19:53:20.759981734Z" level=info msg="CreateContainer within sandbox \"3c215f5d3f32ed130db2425a195d92db7bf16cb27d1abb170597b47a3b30f3b0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:53:20.781681 containerd[1461]: time="2024-10-08T19:53:20.781618181Z" level=info msg="CreateContainer within sandbox \"3c215f5d3f32ed130db2425a195d92db7bf16cb27d1abb170597b47a3b30f3b0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1585aaa9c970c25303944db961e1be050f4ef9566c55b42ab4764f3d723fbb58\"" Oct 8 19:53:20.788770 containerd[1461]: time="2024-10-08T19:53:20.788722010Z" level=info msg="StartContainer for \"1585aaa9c970c25303944db961e1be050f4ef9566c55b42ab4764f3d723fbb58\"" Oct 8 19:53:20.833753 systemd[1]: Started cri-containerd-1585aaa9c970c25303944db961e1be050f4ef9566c55b42ab4764f3d723fbb58.scope - libcontainer container 1585aaa9c970c25303944db961e1be050f4ef9566c55b42ab4764f3d723fbb58. 
Oct 8 19:53:21.067749 kubelet[2668]: E1008 19:53:21.067668 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:21.160786 containerd[1461]: time="2024-10-08T19:53:21.160717472Z" level=info msg="StartContainer for \"1585aaa9c970c25303944db961e1be050f4ef9566c55b42ab4764f3d723fbb58\" returns successfully" Oct 8 19:53:21.430876 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:34400.service - OpenSSH per-connection server daemon (10.0.0.1:34400). Oct 8 19:53:21.478486 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 34400 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:21.480227 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:21.484212 systemd-logind[1446]: New session 10 of user core. Oct 8 19:53:21.494054 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:53:21.613909 kubelet[2668]: I1008 19:53:21.613850 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:53:21.614603 kubelet[2668]: E1008 19:53:21.614575 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:22.151069 sshd[3399]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:22.156313 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:34400.service: Deactivated successfully. Oct 8 19:53:22.158417 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:53:22.159277 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:53:22.160291 systemd-logind[1446]: Removed session 10. Oct 8 19:53:22.167384 kubelet[2668]: E1008 19:53:22.167355 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:22.167707 kubelet[2668]: E1008 19:53:22.167667 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:22.809909 systemd[1]: cri-containerd-1585aaa9c970c25303944db961e1be050f4ef9566c55b42ab4764f3d723fbb58.scope: Deactivated successfully. Oct 8 19:53:22.832457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1585aaa9c970c25303944db961e1be050f4ef9566c55b42ab4764f3d723fbb58-rootfs.mount: Deactivated successfully. Oct 8 19:53:22.897568 kubelet[2668]: I1008 19:53:22.897519 2668 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:53:23.046330 systemd[1]: Created slice kubepods-besteffort-podc2be15c5_243c_4ea8_a2a3_616319911d83.slice - libcontainer container kubepods-besteffort-podc2be15c5_243c_4ea8_a2a3_616319911d83.slice. 
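
Every sandbox create and destroy below fails on the same stat: the calico CNI plugin reads /var/lib/calico/nodename, a file the calico/node container writes once it is up, and aborts pod networking setup and teardown while that file is absent. At this point install-cni has run but calico-node itself is not yet ready. A sketch of that guard (illustrative Go, not calico's actual source):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readNodename sketches the check behind the repeated
    // "stat /var/lib/calico/nodename" failures below: the CNI plugin
    // needs the node name calico/node persists at startup and errors
    // out until the file exists. Illustrative, not calico's source.
    func readNodename() (string, error) {
        const nodenameFile = "/var/lib/calico/nodename"
        b, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", fmt.Errorf("stat %s failed: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        if name, err := readNodename(); err != nil {
            fmt.Println("CNI add/delete would fail here:", err)
        } else {
            fmt.Println("node name:", name)
        }
    }
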
Oct 8 19:53:23.062608 containerd[1461]: time="2024-10-08T19:53:23.062492963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46ktg,Uid:c2be15c5-243c-4ea8-a2a3-616319911d83,Namespace:calico-system,Attempt:0,}" Oct 8 19:53:23.069135 kubelet[2668]: I1008 19:53:23.069072 2668 topology_manager.go:215] "Topology Admit Handler" podUID="c1fbc285-e14b-4647-ab1f-3d69ffb9be3b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pcbsv" Oct 8 19:53:23.073348 kubelet[2668]: I1008 19:53:23.073287 2668 topology_manager.go:215] "Topology Admit Handler" podUID="2ef45011-2762-475b-836a-fed77ffc1a96" podNamespace="calico-system" podName="calico-kube-controllers-75665d5dcd-2wtg4" Oct 8 19:53:23.073510 kubelet[2668]: I1008 19:53:23.073477 2668 topology_manager.go:215] "Topology Admit Handler" podUID="57d1dbbc-3c1e-49e7-917f-8d2167c92f3d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-84xkt" Oct 8 19:53:23.081879 systemd[1]: Created slice kubepods-burstable-podc1fbc285_e14b_4647_ab1f_3d69ffb9be3b.slice - libcontainer container kubepods-burstable-podc1fbc285_e14b_4647_ab1f_3d69ffb9be3b.slice. Oct 8 19:53:23.086854 systemd[1]: Created slice kubepods-besteffort-pod2ef45011_2762_475b_836a_fed77ffc1a96.slice - libcontainer container kubepods-besteffort-pod2ef45011_2762_475b_836a_fed77ffc1a96.slice. Oct 8 19:53:23.092617 systemd[1]: Created slice kubepods-burstable-pod57d1dbbc_3c1e_49e7_917f_8d2167c92f3d.slice - libcontainer container kubepods-burstable-pod57d1dbbc_3c1e_49e7_917f_8d2167c92f3d.slice. Oct 8 19:53:23.155466 containerd[1461]: time="2024-10-08T19:53:23.155365803Z" level=info msg="shim disconnected" id=1585aaa9c970c25303944db961e1be050f4ef9566c55b42ab4764f3d723fbb58 namespace=k8s.io Oct 8 19:53:23.155466 containerd[1461]: time="2024-10-08T19:53:23.155450986Z" level=warning msg="cleaning up after shim disconnected" id=1585aaa9c970c25303944db961e1be050f4ef9566c55b42ab4764f3d723fbb58 namespace=k8s.io Oct 8 19:53:23.155466 containerd[1461]: time="2024-10-08T19:53:23.155465042Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:53:23.169806 kubelet[2668]: E1008 19:53:23.169555 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:23.184987 kubelet[2668]: I1008 19:53:23.184456 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nhk\" (UniqueName: \"kubernetes.io/projected/57d1dbbc-3c1e-49e7-917f-8d2167c92f3d-kube-api-access-67nhk\") pod \"coredns-7db6d8ff4d-84xkt\" (UID: \"57d1dbbc-3c1e-49e7-917f-8d2167c92f3d\") " pod="kube-system/coredns-7db6d8ff4d-84xkt" Oct 8 19:53:23.184987 kubelet[2668]: I1008 19:53:23.184566 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ef45011-2762-475b-836a-fed77ffc1a96-tigera-ca-bundle\") pod \"calico-kube-controllers-75665d5dcd-2wtg4\" (UID: \"2ef45011-2762-475b-836a-fed77ffc1a96\") " pod="calico-system/calico-kube-controllers-75665d5dcd-2wtg4" Oct 8 19:53:23.184987 kubelet[2668]: I1008 19:53:23.184596 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57d1dbbc-3c1e-49e7-917f-8d2167c92f3d-config-volume\") pod \"coredns-7db6d8ff4d-84xkt\" (UID: \"57d1dbbc-3c1e-49e7-917f-8d2167c92f3d\") " 
pod="kube-system/coredns-7db6d8ff4d-84xkt" Oct 8 19:53:23.184987 kubelet[2668]: I1008 19:53:23.184644 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmml\" (UniqueName: \"kubernetes.io/projected/2ef45011-2762-475b-836a-fed77ffc1a96-kube-api-access-2hmml\") pod \"calico-kube-controllers-75665d5dcd-2wtg4\" (UID: \"2ef45011-2762-475b-836a-fed77ffc1a96\") " pod="calico-system/calico-kube-controllers-75665d5dcd-2wtg4" Oct 8 19:53:23.184987 kubelet[2668]: I1008 19:53:23.184684 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nktw\" (UniqueName: \"kubernetes.io/projected/c1fbc285-e14b-4647-ab1f-3d69ffb9be3b-kube-api-access-2nktw\") pod \"coredns-7db6d8ff4d-pcbsv\" (UID: \"c1fbc285-e14b-4647-ab1f-3d69ffb9be3b\") " pod="kube-system/coredns-7db6d8ff4d-pcbsv" Oct 8 19:53:23.185220 kubelet[2668]: I1008 19:53:23.184720 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1fbc285-e14b-4647-ab1f-3d69ffb9be3b-config-volume\") pod \"coredns-7db6d8ff4d-pcbsv\" (UID: \"c1fbc285-e14b-4647-ab1f-3d69ffb9be3b\") " pod="kube-system/coredns-7db6d8ff4d-pcbsv" Oct 8 19:53:23.249628 containerd[1461]: time="2024-10-08T19:53:23.249559618Z" level=error msg="Failed to destroy network for sandbox \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.250108 containerd[1461]: time="2024-10-08T19:53:23.250057396Z" level=error msg="encountered an error cleaning up failed sandbox \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.250108 containerd[1461]: time="2024-10-08T19:53:23.250113633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46ktg,Uid:c2be15c5-243c-4ea8-a2a3-616319911d83,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.250658 kubelet[2668]: E1008 19:53:23.250549 2668 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.250797 kubelet[2668]: E1008 19:53:23.250693 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-46ktg" Oct 8 19:53:23.250797 kubelet[2668]: E1008 19:53:23.250731 2668 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-46ktg" Oct 8 19:53:23.250880 kubelet[2668]: E1008 19:53:23.250812 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-46ktg_calico-system(c2be15c5-243c-4ea8-a2a3-616319911d83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-46ktg_calico-system(c2be15c5-243c-4ea8-a2a3-616319911d83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:23.252288 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab-shm.mount: Deactivated successfully. Oct 8 19:53:23.386436 kubelet[2668]: E1008 19:53:23.386206 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:23.387572 containerd[1461]: time="2024-10-08T19:53:23.387497753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pcbsv,Uid:c1fbc285-e14b-4647-ab1f-3d69ffb9be3b,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:23.390626 containerd[1461]: time="2024-10-08T19:53:23.390584480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75665d5dcd-2wtg4,Uid:2ef45011-2762-475b-836a-fed77ffc1a96,Namespace:calico-system,Attempt:0,}" Oct 8 19:53:23.394879 kubelet[2668]: E1008 19:53:23.394823 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:23.395349 containerd[1461]: time="2024-10-08T19:53:23.395302413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84xkt,Uid:57d1dbbc-3c1e-49e7-917f-8d2167c92f3d,Namespace:kube-system,Attempt:0,}" Oct 8 19:53:23.486692 containerd[1461]: time="2024-10-08T19:53:23.486624831Z" level=error msg="Failed to destroy network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.487135 containerd[1461]: time="2024-10-08T19:53:23.487104975Z" level=error msg="encountered an error cleaning up failed sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 8 19:53:23.487200 containerd[1461]: time="2024-10-08T19:53:23.487161402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pcbsv,Uid:c1fbc285-e14b-4647-ab1f-3d69ffb9be3b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.487455 kubelet[2668]: E1008 19:53:23.487403 2668 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.487521 kubelet[2668]: E1008 19:53:23.487481 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pcbsv" Oct 8 19:53:23.487521 kubelet[2668]: E1008 19:53:23.487502 2668 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pcbsv" Oct 8 19:53:23.487571 kubelet[2668]: E1008 19:53:23.487552 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pcbsv_kube-system(c1fbc285-e14b-4647-ab1f-3d69ffb9be3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pcbsv_kube-system(c1fbc285-e14b-4647-ab1f-3d69ffb9be3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pcbsv" podUID="c1fbc285-e14b-4647-ab1f-3d69ffb9be3b" Oct 8 19:53:23.487875 containerd[1461]: time="2024-10-08T19:53:23.487818053Z" level=error msg="Failed to destroy network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.488316 containerd[1461]: time="2024-10-08T19:53:23.488278639Z" level=error msg="encountered an error cleaning up failed sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.488380 containerd[1461]: time="2024-10-08T19:53:23.488334616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75665d5dcd-2wtg4,Uid:2ef45011-2762-475b-836a-fed77ffc1a96,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.488482 kubelet[2668]: E1008 19:53:23.488449 2668 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.488542 kubelet[2668]: E1008 19:53:23.488490 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75665d5dcd-2wtg4" Oct 8 19:53:23.488542 kubelet[2668]: E1008 19:53:23.488513 2668 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75665d5dcd-2wtg4" Oct 8 19:53:23.488607 kubelet[2668]: E1008 19:53:23.488551 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75665d5dcd-2wtg4_calico-system(2ef45011-2762-475b-836a-fed77ffc1a96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75665d5dcd-2wtg4_calico-system(2ef45011-2762-475b-836a-fed77ffc1a96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75665d5dcd-2wtg4" podUID="2ef45011-2762-475b-836a-fed77ffc1a96" Oct 8 19:53:23.498795 containerd[1461]: time="2024-10-08T19:53:23.498732373Z" level=error msg="Failed to destroy network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.499202 containerd[1461]: time="2024-10-08T19:53:23.499159648Z" level=error msg="encountered an error cleaning up failed sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.499271 containerd[1461]: time="2024-10-08T19:53:23.499231134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84xkt,Uid:57d1dbbc-3c1e-49e7-917f-8d2167c92f3d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.499459 kubelet[2668]: E1008 19:53:23.499430 2668 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:23.499534 kubelet[2668]: E1008 19:53:23.499479 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-84xkt" Oct 8 19:53:23.499534 kubelet[2668]: E1008 19:53:23.499499 2668 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-84xkt" Oct 8 19:53:23.499611 kubelet[2668]: E1008 19:53:23.499542 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-84xkt_kube-system(57d1dbbc-3c1e-49e7-917f-8d2167c92f3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-84xkt_kube-system(57d1dbbc-3c1e-49e7-917f-8d2167c92f3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-84xkt" podUID="57d1dbbc-3c1e-49e7-917f-8d2167c92f3d" Oct 8 19:53:24.172335 kubelet[2668]: I1008 19:53:24.172301 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:24.172980 containerd[1461]: time="2024-10-08T19:53:24.172953771Z" level=info msg="StopPodSandbox for \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\"" Oct 8 19:53:24.173309 containerd[1461]: time="2024-10-08T19:53:24.173117162Z" level=info msg="Ensure that sandbox 
87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25 in task-service has been cleanup successfully" Oct 8 19:53:24.173552 kubelet[2668]: I1008 19:53:24.173536 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" Oct 8 19:53:24.173996 containerd[1461]: time="2024-10-08T19:53:24.173970877Z" level=info msg="StopPodSandbox for \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\"" Oct 8 19:53:24.174158 containerd[1461]: time="2024-10-08T19:53:24.174127084Z" level=info msg="Ensure that sandbox 99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab in task-service has been cleanup successfully" Oct 8 19:53:24.177122 kubelet[2668]: E1008 19:53:24.176681 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:24.177789 containerd[1461]: time="2024-10-08T19:53:24.177744430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:53:24.179899 kubelet[2668]: I1008 19:53:24.179862 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:24.180901 containerd[1461]: time="2024-10-08T19:53:24.180852115Z" level=info msg="StopPodSandbox for \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\"" Oct 8 19:53:24.182740 containerd[1461]: time="2024-10-08T19:53:24.182361919Z" level=info msg="Ensure that sandbox 772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f in task-service has been cleanup successfully" Oct 8 19:53:24.183306 kubelet[2668]: I1008 19:53:24.183275 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:24.184086 containerd[1461]: time="2024-10-08T19:53:24.184055222Z" level=info msg="StopPodSandbox for \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\"" Oct 8 19:53:24.184342 containerd[1461]: time="2024-10-08T19:53:24.184316720Z" level=info msg="Ensure that sandbox 8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0 in task-service has been cleanup successfully" Oct 8 19:53:24.215133 containerd[1461]: time="2024-10-08T19:53:24.215073993Z" level=error msg="StopPodSandbox for \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\" failed" error="failed to destroy network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:24.215703 kubelet[2668]: E1008 19:53:24.215519 2668 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:24.215703 kubelet[2668]: E1008 19:53:24.215588 2668 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25"} Oct 8 19:53:24.215703 kubelet[2668]: E1008 19:53:24.215649 2668 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ef45011-2762-475b-836a-fed77ffc1a96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:53:24.215703 kubelet[2668]: E1008 19:53:24.215673 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ef45011-2762-475b-836a-fed77ffc1a96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75665d5dcd-2wtg4" podUID="2ef45011-2762-475b-836a-fed77ffc1a96" Oct 8 19:53:24.222394 containerd[1461]: time="2024-10-08T19:53:24.222333090Z" level=error msg="StopPodSandbox for \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\" failed" error="failed to destroy network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:24.222765 kubelet[2668]: E1008 19:53:24.222608 2668 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:24.222765 kubelet[2668]: E1008 19:53:24.222666 2668 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f"} Oct 8 19:53:24.222765 kubelet[2668]: E1008 19:53:24.222710 2668 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57d1dbbc-3c1e-49e7-917f-8d2167c92f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:53:24.222765 kubelet[2668]: E1008 19:53:24.222735 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57d1dbbc-3c1e-49e7-917f-8d2167c92f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-84xkt" podUID="57d1dbbc-3c1e-49e7-917f-8d2167c92f3d" Oct 8 19:53:24.223633 containerd[1461]: time="2024-10-08T19:53:24.223580164Z" level=error msg="StopPodSandbox for \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\" failed" error="failed to destroy network for sandbox \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:24.223758 kubelet[2668]: E1008 19:53:24.223729 2668 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" Oct 8 19:53:24.223758 kubelet[2668]: E1008 19:53:24.223758 2668 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab"} Oct 8 19:53:24.223869 kubelet[2668]: E1008 19:53:24.223777 2668 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c2be15c5-243c-4ea8-a2a3-616319911d83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:53:24.223869 kubelet[2668]: E1008 19:53:24.223794 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2be15c5-243c-4ea8-a2a3-616319911d83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-46ktg" podUID="c2be15c5-243c-4ea8-a2a3-616319911d83" Oct 8 19:53:24.232851 containerd[1461]: time="2024-10-08T19:53:24.232809362Z" level=error msg="StopPodSandbox for \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\" failed" error="failed to destroy network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:53:24.233069 kubelet[2668]: E1008 19:53:24.233021 2668 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:24.233129 kubelet[2668]: E1008 19:53:24.233075 2668 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0"} Oct 8 19:53:24.233129 kubelet[2668]: E1008 19:53:24.233111 2668 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1fbc285-e14b-4647-ab1f-3d69ffb9be3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:53:24.233228 kubelet[2668]: E1008 19:53:24.233141 2668 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1fbc285-e14b-4647-ab1f-3d69ffb9be3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pcbsv" podUID="c1fbc285-e14b-4647-ab1f-3d69ffb9be3b" Oct 8 19:53:27.165680 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:34406.service - OpenSSH per-connection server daemon (10.0.0.1:34406). Oct 8 19:53:27.217634 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 34406 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:27.230155 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:27.235751 systemd-logind[1446]: New session 11 of user core. Oct 8 19:53:27.243221 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:53:27.387211 sshd[3690]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:27.397178 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:34406.service: Deactivated successfully. Oct 8 19:53:27.400714 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:53:27.402886 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:53:27.414203 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:34412.service - OpenSSH per-connection server daemon (10.0.0.1:34412). Oct 8 19:53:27.415998 systemd-logind[1446]: Removed session 11. Oct 8 19:53:27.450342 sshd[3705]: Accepted publickey for core from 10.0.0.1 port 34412 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:27.452746 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:27.458668 systemd-logind[1446]: New session 12 of user core. Oct 8 19:53:27.463141 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:53:27.765343 sshd[3705]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:27.773439 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:34412.service: Deactivated successfully. Oct 8 19:53:27.776091 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:53:27.778508 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. 
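[Editor's note: every RunPodSandbox/StopPodSandbox failure above comes down to one missing file, /var/lib/calico/nodename. The calico/node container writes that file at startup, and the Calico CNI plugin refuses both ADD and DEL until it exists, so kubelet's sync loop keeps retrying and logging the same stat error; the pull of ghcr.io/flatcar/calico/node that completes at 19:53:29 below is what eventually unblocks it. A minimal sketch of that gating check, assuming the standard mount path; function names here are mine, and this is an illustration, not Calico's actual Go source:]

NODENAME_FILE = "/var/lib/calico/nodename"  # written by calico/node once it is running

def determine_nodename() -> str:
    # The CNI plugin's precondition: no nodename file, no ADD or DEL.
    try:
        with open(NODENAME_FILE) as f:
            return f.read().strip()
    except FileNotFoundError:
        # The exact situation logged above; kubelet will retry the sandbox later.
        raise RuntimeError(
            "stat /var/lib/calico/nodename: no such file or directory: "
            "check that the calico/node container is running and has mounted /var/lib/calico/"
        ) from None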
Oct 8 19:53:27.788359 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:34416.service - OpenSSH per-connection server daemon (10.0.0.1:34416). Oct 8 19:53:27.789839 systemd-logind[1446]: Removed session 12. Oct 8 19:53:27.831938 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 34416 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:27.833909 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:27.838601 systemd-logind[1446]: New session 13 of user core. Oct 8 19:53:27.842037 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:53:28.039384 sshd[3717]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:28.044221 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:34416.service: Deactivated successfully. Oct 8 19:53:28.046835 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:53:28.048533 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:53:28.049729 systemd-logind[1446]: Removed session 13. Oct 8 19:53:28.999489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828775852.mount: Deactivated successfully. Oct 8 19:53:29.943114 containerd[1461]: time="2024-10-08T19:53:29.943021767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:29.943957 containerd[1461]: time="2024-10-08T19:53:29.943881981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 19:53:29.945207 containerd[1461]: time="2024-10-08T19:53:29.945160041Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:29.947293 containerd[1461]: time="2024-10-08T19:53:29.947247268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:29.947864 containerd[1461]: time="2024-10-08T19:53:29.947829064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.77004097s" Oct 8 19:53:29.947864 containerd[1461]: time="2024-10-08T19:53:29.947862498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 19:53:29.957622 containerd[1461]: time="2024-10-08T19:53:29.957554369Z" level=info msg="CreateContainer within sandbox \"3c215f5d3f32ed130db2425a195d92db7bf16cb27d1abb170597b47a3b30f3b0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:53:29.978812 containerd[1461]: time="2024-10-08T19:53:29.978735559Z" level=info msg="CreateContainer within sandbox \"3c215f5d3f32ed130db2425a195d92db7bf16cb27d1abb170597b47a3b30f3b0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"44df7e20419ac1b7b47ad41ee2b3809377f74f8f059dd6f1e10ba5aea2c0f1bf\"" Oct 8 19:53:29.979534 containerd[1461]: time="2024-10-08T19:53:29.979469304Z" level=info msg="StartContainer for 
\"44df7e20419ac1b7b47ad41ee2b3809377f74f8f059dd6f1e10ba5aea2c0f1bf\"" Oct 8 19:53:30.079251 systemd[1]: Started cri-containerd-44df7e20419ac1b7b47ad41ee2b3809377f74f8f059dd6f1e10ba5aea2c0f1bf.scope - libcontainer container 44df7e20419ac1b7b47ad41ee2b3809377f74f8f059dd6f1e10ba5aea2c0f1bf. Oct 8 19:53:30.473356 containerd[1461]: time="2024-10-08T19:53:30.473283752Z" level=info msg="StartContainer for \"44df7e20419ac1b7b47ad41ee2b3809377f74f8f059dd6f1e10ba5aea2c0f1bf\" returns successfully" Oct 8 19:53:30.484951 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:53:30.485088 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Oct 8 19:53:31.478900 kubelet[2668]: E1008 19:53:31.478862 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:31.511618 kubelet[2668]: I1008 19:53:31.510754 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lgxwr" podStartSLOduration=2.934596479 podStartE2EDuration="26.510732295s" podCreationTimestamp="2024-10-08 19:53:05 +0000 UTC" firstStartedPulling="2024-10-08 19:53:06.372607867 +0000 UTC m=+26.425749537" lastFinishedPulling="2024-10-08 19:53:29.948743683 +0000 UTC m=+50.001885353" observedRunningTime="2024-10-08 19:53:31.510136774 +0000 UTC m=+51.563278445" watchObservedRunningTime="2024-10-08 19:53:31.510732295 +0000 UTC m=+51.563873965" Oct 8 19:53:32.031959 kernel: bpftool[3946]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:53:32.327445 systemd-networkd[1394]: vxlan.calico: Link UP Oct 8 19:53:32.327471 systemd-networkd[1394]: vxlan.calico: Gained carrier Oct 8 19:53:32.480711 kubelet[2668]: E1008 19:53:32.480668 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:33.060592 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:43256.service - OpenSSH per-connection server daemon (10.0.0.1:43256). Oct 8 19:53:33.105649 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 43256 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:33.107671 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:33.112298 systemd-logind[1446]: New session 14 of user core. Oct 8 19:53:33.122087 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:53:33.242512 sshd[4043]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:33.246897 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:43256.service: Deactivated successfully. Oct 8 19:53:33.249252 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:53:33.250009 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:53:33.251066 systemd-logind[1446]: Removed session 14.
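[Editor's note: the startup-latency arithmetic in the calico-node-lgxwr entry above is internally consistent: kubelet reports the end-to-end time from pod creation to observed running, and the SLO duration is that figure minus the image-pull window. Reproducing it from the logged timestamps (all within minute 19:53; creation at :05 has no fractional part in the log):]

# Seconds into 19:53, taken from the pod_startup_latency_tracker entry above.
creation     = 5.0
first_pull   = 6.372607867    # firstStartedPulling
last_pull    = 29.948743683   # lastFinishedPulling
observed_run = 31.510732295   # watchObservedRunningTime

e2e = observed_run - creation          # 26.510732295 s = podStartE2EDuration
slo = e2e - (last_pull - first_pull)   # 2.934596479 s  = podStartSLOduration
print(f"{e2e:.9f} {slo:.9f}")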
Oct 8 19:53:34.300200 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Oct 8 19:53:35.039317 containerd[1461]: time="2024-10-08T19:53:35.039254386Z" level=info msg="StopPodSandbox for \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\"" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.203 [INFO][4079] k8s.go 608: Cleaning up netns ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.203 [INFO][4079] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" iface="eth0" netns="/var/run/netns/cni-816e171c-f163-c919-a03e-9bfb78ada1c7" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.204 [INFO][4079] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" iface="eth0" netns="/var/run/netns/cni-816e171c-f163-c919-a03e-9bfb78ada1c7" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.204 [INFO][4079] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" iface="eth0" netns="/var/run/netns/cni-816e171c-f163-c919-a03e-9bfb78ada1c7" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.204 [INFO][4079] k8s.go 615: Releasing IP address(es) ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.204 [INFO][4079] utils.go 188: Calico CNI releasing IP address ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.276 [INFO][4087] ipam_plugin.go 417: Releasing address using handleID ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" HandleID="k8s-pod-network.8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.276 [INFO][4087] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.276 [INFO][4087] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.283 [WARNING][4087] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" HandleID="k8s-pod-network.8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.283 [INFO][4087] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" HandleID="k8s-pod-network.8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.284 [INFO][4087] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:35.289211 containerd[1461]: 2024-10-08 19:53:35.286 [INFO][4079] k8s.go 621: Teardown processing complete. 
ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:35.290305 containerd[1461]: time="2024-10-08T19:53:35.289411944Z" level=info msg="TearDown network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\" successfully" Oct 8 19:53:35.290305 containerd[1461]: time="2024-10-08T19:53:35.289447302Z" level=info msg="StopPodSandbox for \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\" returns successfully" Oct 8 19:53:35.290396 kubelet[2668]: E1008 19:53:35.289953 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:35.291587 containerd[1461]: time="2024-10-08T19:53:35.291554942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pcbsv,Uid:c1fbc285-e14b-4647-ab1f-3d69ffb9be3b,Namespace:kube-system,Attempt:1,}" Oct 8 19:53:35.293142 systemd[1]: run-netns-cni\x2d816e171c\x2df163\x2dc919\x2da03e\x2d9bfb78ada1c7.mount: Deactivated successfully. Oct 8 19:53:35.564117 systemd-networkd[1394]: calia34d0c6b136: Link UP Oct 8 19:53:35.564433 systemd-networkd[1394]: calia34d0c6b136: Gained carrier Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.491 [INFO][4095] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0 coredns-7db6d8ff4d- kube-system c1fbc285-e14b-4647-ab1f-3d69ffb9be3b 885 0 2024-10-08 19:52:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-pcbsv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia34d0c6b136 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pcbsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pcbsv-" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.492 [INFO][4095] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pcbsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.522 [INFO][4108] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" HandleID="k8s-pod-network.0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.530 [INFO][4108] ipam_plugin.go 270: Auto assigning IP ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" HandleID="k8s-pod-network.0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003660a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-pcbsv", "timestamp":"2024-10-08 19:53:35.522070199 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.530 [INFO][4108] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.530 [INFO][4108] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.530 [INFO][4108] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.532 [INFO][4108] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" host="localhost" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.537 [INFO][4108] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.542 [INFO][4108] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.544 [INFO][4108] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.546 [INFO][4108] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.546 [INFO][4108] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" host="localhost" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.547 [INFO][4108] ipam.go 1685: Creating new handle: k8s-pod-network.0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719 Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.553 [INFO][4108] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" host="localhost" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.558 [INFO][4108] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" host="localhost" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.558 [INFO][4108] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" host="localhost" Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.558 [INFO][4108] ipam_plugin.go 379: Released host-wide IPAM lock. 
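[Editor's note: the IPAM walk just above is Calico's block-affinity allocation. The host "localhost" already holds an affinity for the /26 block 192.168.88.128/26, so the allocator confirms the affinity, loads the block, and hands the first free host address, 192.168.88.129, to coredns-7db6d8ff4d-pcbsv. A standard-library sketch of the address pick; the handle tracking and host-wide lock from the log are deliberately omitted:]

import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")   # the affine block from the log
allocated = set()                                   # nothing assigned on this host yet

def assign_from_block():
    # .hosts() skips the network (.128) and broadcast (.191) addresses.
    for ip in block.hosts():
        if ip not in allocated:
            allocated.add(ip)
            return ip
    raise RuntimeError("block exhausted; the real allocator would claim a new /26")

print(assign_from_block())   # 192.168.88.129 -> coredns-7db6d8ff4d-pcbsv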
Oct 8 19:53:35.595479 containerd[1461]: 2024-10-08 19:53:35.558 [INFO][4108] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" HandleID="k8s-pod-network.0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.597085 containerd[1461]: 2024-10-08 19:53:35.561 [INFO][4095] k8s.go 386: Populated endpoint ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pcbsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c1fbc285-e14b-4647-ab1f-3d69ffb9be3b", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-pcbsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia34d0c6b136", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:35.597085 containerd[1461]: 2024-10-08 19:53:35.561 [INFO][4095] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pcbsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.597085 containerd[1461]: 2024-10-08 19:53:35.561 [INFO][4095] dataplane_linux.go 68: Setting the host side veth name to calia34d0c6b136 ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pcbsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.597085 containerd[1461]: 2024-10-08 19:53:35.564 [INFO][4095] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pcbsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.597085 containerd[1461]: 2024-10-08 19:53:35.564 [INFO][4095] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pcbsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c1fbc285-e14b-4647-ab1f-3d69ffb9be3b", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719", Pod:"coredns-7db6d8ff4d-pcbsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia34d0c6b136", MAC:"ca:db:dc:45:a2:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:35.597085 containerd[1461]: 2024-10-08 19:53:35.591 [INFO][4095] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pcbsv" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:35.632883 containerd[1461]: time="2024-10-08T19:53:35.632442084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:35.632883 containerd[1461]: time="2024-10-08T19:53:35.632639809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:35.632883 containerd[1461]: time="2024-10-08T19:53:35.632700825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:35.632883 containerd[1461]: time="2024-10-08T19:53:35.632789473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:35.663196 systemd[1]: Started cri-containerd-0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719.scope - libcontainer container 0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719. 
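[Editor's note: in the WorkloadEndpoint dump above, Go prints the port numbers as hex literals: Port:0x35 is 53 for the dns and dns-tcp entries, and Port:0x23c1 is 9153 for the metrics entry, matching the {dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153} summary earlier in the same entry. A quick decode:]

# Hex port values from the endpoint dump, converted to decimal.
for name, port in (("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1)):
    print(name, port)   # dns 53, dns-tcp 53, metrics 9153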
Oct 8 19:53:35.678657 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:53:35.705571 containerd[1461]: time="2024-10-08T19:53:35.705461564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pcbsv,Uid:c1fbc285-e14b-4647-ab1f-3d69ffb9be3b,Namespace:kube-system,Attempt:1,} returns sandbox id \"0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719\"" Oct 8 19:53:35.706792 kubelet[2668]: E1008 19:53:35.706765 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:35.709200 containerd[1461]: time="2024-10-08T19:53:35.709128585Z" level=info msg="CreateContainer within sandbox \"0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:53:35.735603 containerd[1461]: time="2024-10-08T19:53:35.735542969Z" level=info msg="CreateContainer within sandbox \"0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7f9fbe9ff2f633b85fbf75c215224a19e23967e154953895f82b2bfab95e011\"" Oct 8 19:53:35.736300 containerd[1461]: time="2024-10-08T19:53:35.736194286Z" level=info msg="StartContainer for \"d7f9fbe9ff2f633b85fbf75c215224a19e23967e154953895f82b2bfab95e011\"" Oct 8 19:53:35.768146 systemd[1]: Started cri-containerd-d7f9fbe9ff2f633b85fbf75c215224a19e23967e154953895f82b2bfab95e011.scope - libcontainer container d7f9fbe9ff2f633b85fbf75c215224a19e23967e154953895f82b2bfab95e011. Oct 8 19:53:35.800845 containerd[1461]: time="2024-10-08T19:53:35.800788345Z" level=info msg="StartContainer for \"d7f9fbe9ff2f633b85fbf75c215224a19e23967e154953895f82b2bfab95e011\" returns successfully" Oct 8 19:53:36.494388 kubelet[2668]: E1008 19:53:36.494286 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:36.506279 kubelet[2668]: I1008 19:53:36.506193 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pcbsv" podStartSLOduration=40.506165072 podStartE2EDuration="40.506165072s" podCreationTimestamp="2024-10-08 19:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:53:36.503066432 +0000 UTC m=+56.556208102" watchObservedRunningTime="2024-10-08 19:53:36.506165072 +0000 UTC m=+56.559306752" Oct 8 19:53:37.040101 containerd[1461]: time="2024-10-08T19:53:37.039993645Z" level=info msg="StopPodSandbox for \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\"" Oct 8 19:53:37.116653 systemd-networkd[1394]: calia34d0c6b136: Gained IPv6LL Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.130 [INFO][4229] k8s.go 608: Cleaning up netns ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.131 [INFO][4229] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" iface="eth0" netns="/var/run/netns/cni-638ad6cd-9184-8007-4714-2f625bb93e47" Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.131 [INFO][4229] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" iface="eth0" netns="/var/run/netns/cni-638ad6cd-9184-8007-4714-2f625bb93e47" Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.132 [INFO][4229] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" iface="eth0" netns="/var/run/netns/cni-638ad6cd-9184-8007-4714-2f625bb93e47" Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.132 [INFO][4229] k8s.go 615: Releasing IP address(es) ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.132 [INFO][4229] utils.go 188: Calico CNI releasing IP address ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.163 [INFO][4236] ipam_plugin.go 417: Releasing address using handleID ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" HandleID="k8s-pod-network.87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.163 [INFO][4236] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.163 [INFO][4236] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.174 [WARNING][4236] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" HandleID="k8s-pod-network.87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.174 [INFO][4236] ipam_plugin.go 445: Releasing address using workloadID ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" HandleID="k8s-pod-network.87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.176 [INFO][4236] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:37.183756 containerd[1461]: 2024-10-08 19:53:37.180 [INFO][4229] k8s.go 621: Teardown processing complete. 
ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:37.184337 containerd[1461]: time="2024-10-08T19:53:37.184028708Z" level=info msg="TearDown network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\" successfully" Oct 8 19:53:37.184337 containerd[1461]: time="2024-10-08T19:53:37.184069957Z" level=info msg="StopPodSandbox for \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\" returns successfully" Oct 8 19:53:37.186070 containerd[1461]: time="2024-10-08T19:53:37.186014276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75665d5dcd-2wtg4,Uid:2ef45011-2762-475b-836a-fed77ffc1a96,Namespace:calico-system,Attempt:1,}" Oct 8 19:53:37.188363 systemd[1]: run-netns-cni\x2d638ad6cd\x2d9184\x2d8007\x2d4714\x2d2f625bb93e47.mount: Deactivated successfully. Oct 8 19:53:37.360317 systemd-networkd[1394]: cali1c4cf17819c: Link UP Oct 8 19:53:37.361446 systemd-networkd[1394]: cali1c4cf17819c: Gained carrier Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.263 [INFO][4245] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0 calico-kube-controllers-75665d5dcd- calico-system 2ef45011-2762-475b-836a-fed77ffc1a96 910 0 2024-10-08 19:53:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75665d5dcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75665d5dcd-2wtg4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1c4cf17819c [] []}} ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Namespace="calico-system" Pod="calico-kube-controllers-75665d5dcd-2wtg4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.263 [INFO][4245] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Namespace="calico-system" Pod="calico-kube-controllers-75665d5dcd-2wtg4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.306 [INFO][4258] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" HandleID="k8s-pod-network.4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.315 [INFO][4258] ipam_plugin.go 270: Auto assigning IP ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" HandleID="k8s-pod-network.4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295c70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75665d5dcd-2wtg4", "timestamp":"2024-10-08 19:53:37.306389161 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.315 [INFO][4258] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.315 [INFO][4258] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.315 [INFO][4258] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.317 [INFO][4258] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" host="localhost" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.324 [INFO][4258] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.329 [INFO][4258] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.331 [INFO][4258] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.333 [INFO][4258] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.333 [INFO][4258] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" host="localhost" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.336 [INFO][4258] ipam.go 1685: Creating new handle: k8s-pod-network.4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1 Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.340 [INFO][4258] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" host="localhost" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.349 [INFO][4258] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" host="localhost" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.349 [INFO][4258] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" host="localhost" Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.349 [INFO][4258] ipam_plugin.go 379: Released host-wide IPAM lock. 
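[Editor's note: the second allocation repeats the same walk over the same affine block, serialized by the host-wide IPAM lock visible in the "About to acquire"/"Acquired"/"Released" lines, so calico-kube-controllers-75665d5dcd-2wtg4 simply receives the next free address. Continuing the earlier sketch, now with .129 already taken:]

import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
allocated = {ipaddress.ip_address("192.168.88.129")}   # claimed above by coredns-pcbsv

# Under the (serialized) host-wide lock, the next free host address wins.
next_ip = next(ip for ip in block.hosts() if ip not in allocated)
print(next_ip)   # 192.168.88.130 -> calico-kube-controllers-75665d5dcd-2wtg4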
Oct 8 19:53:37.380503 containerd[1461]: 2024-10-08 19:53:37.349 [INFO][4258] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" HandleID="k8s-pod-network.4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.382095 containerd[1461]: 2024-10-08 19:53:37.354 [INFO][4245] k8s.go 386: Populated endpoint ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Namespace="calico-system" Pod="calico-kube-controllers-75665d5dcd-2wtg4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0", GenerateName:"calico-kube-controllers-75665d5dcd-", Namespace:"calico-system", SelfLink:"", UID:"2ef45011-2762-475b-836a-fed77ffc1a96", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75665d5dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75665d5dcd-2wtg4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c4cf17819c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:37.382095 containerd[1461]: 2024-10-08 19:53:37.354 [INFO][4245] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Namespace="calico-system" Pod="calico-kube-controllers-75665d5dcd-2wtg4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.382095 containerd[1461]: 2024-10-08 19:53:37.354 [INFO][4245] dataplane_linux.go 68: Setting the host side veth name to cali1c4cf17819c ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Namespace="calico-system" Pod="calico-kube-controllers-75665d5dcd-2wtg4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.382095 containerd[1461]: 2024-10-08 19:53:37.362 [INFO][4245] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Namespace="calico-system" Pod="calico-kube-controllers-75665d5dcd-2wtg4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.382095 containerd[1461]: 2024-10-08 19:53:37.363 [INFO][4245] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Namespace="calico-system" Pod="calico-kube-controllers-75665d5dcd-2wtg4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0", GenerateName:"calico-kube-controllers-75665d5dcd-", Namespace:"calico-system", SelfLink:"", UID:"2ef45011-2762-475b-836a-fed77ffc1a96", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75665d5dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1", Pod:"calico-kube-controllers-75665d5dcd-2wtg4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c4cf17819c", MAC:"a2:48:31:1a:e8:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:37.382095 containerd[1461]: 2024-10-08 19:53:37.374 [INFO][4245] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1" Namespace="calico-system" Pod="calico-kube-controllers-75665d5dcd-2wtg4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:37.419455 containerd[1461]: time="2024-10-08T19:53:37.419226876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:37.420083 containerd[1461]: time="2024-10-08T19:53:37.420002057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:37.420184 containerd[1461]: time="2024-10-08T19:53:37.420063354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:37.420370 containerd[1461]: time="2024-10-08T19:53:37.420249537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:37.457419 systemd[1]: Started cri-containerd-4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1.scope - libcontainer container 4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1. 
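[Editor's note: the "Nameserver limits exceeded" errors recurring through this section (another instance follows just below) are kubelet enforcing the Kubernetes cap of three nameservers per resolv.conf; this host evidently lists more, so kubelet keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and reports the rest as omitted. A hedged sketch of the truncation; the cap of 3 matches the MaxDNSNameservers constant in the Kubernetes source, while the function and the fourth nameserver below are illustrative:]

MAX_DNS_NAMESERVERS = 3   # Kubernetes limit on nameservers per pod resolv.conf

def apply_nameserver_limit(nameservers):
    if len(nameservers) <= MAX_DNS_NAMESERVERS:
        return nameservers
    kept = nameservers[:MAX_DNS_NAMESERVERS]
    print('"Nameserver limits exceeded", applied nameserver line:', " ".join(kept))
    return kept

# Hypothetical host list with a fourth entry, enough to trigger the warning:
apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])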
Oct 8 19:53:37.474967 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:53:37.504035 kubelet[2668]: E1008 19:53:37.503954 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:37.519807 containerd[1461]: time="2024-10-08T19:53:37.519571129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75665d5dcd-2wtg4,Uid:2ef45011-2762-475b-836a-fed77ffc1a96,Namespace:calico-system,Attempt:1,} returns sandbox id \"4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1\"" Oct 8 19:53:37.525082 containerd[1461]: time="2024-10-08T19:53:37.525043382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:53:38.266654 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:43408.service - OpenSSH per-connection server daemon (10.0.0.1:43408). Oct 8 19:53:38.378588 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 43408 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:38.384241 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:38.401604 systemd-logind[1446]: New session 15 of user core. Oct 8 19:53:38.410654 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:53:38.518508 kubelet[2668]: E1008 19:53:38.518347 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:38.585409 sshd[4330]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:38.590327 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:43408.service: Deactivated successfully. Oct 8 19:53:38.593186 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:53:38.594267 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:53:38.596057 systemd-logind[1446]: Removed session 15. Oct 8 19:53:39.042357 containerd[1461]: time="2024-10-08T19:53:39.040386980Z" level=info msg="StopPodSandbox for \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\"" Oct 8 19:53:39.042357 containerd[1461]: time="2024-10-08T19:53:39.041464775Z" level=info msg="StopPodSandbox for \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\"" Oct 8 19:53:39.230279 systemd-networkd[1394]: cali1c4cf17819c: Gained IPv6LL Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.211 [INFO][4377] k8s.go 608: Cleaning up netns ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.211 [INFO][4377] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" iface="eth0" netns="/var/run/netns/cni-17cc0cb0-590c-8c07-9492-968758016fc6" Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.212 [INFO][4377] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" iface="eth0" netns="/var/run/netns/cni-17cc0cb0-590c-8c07-9492-968758016fc6" Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.213 [INFO][4377] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" iface="eth0" netns="/var/run/netns/cni-17cc0cb0-590c-8c07-9492-968758016fc6" Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.214 [INFO][4377] k8s.go 615: Releasing IP address(es) ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.214 [INFO][4377] utils.go 188: Calico CNI releasing IP address ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.278 [INFO][4390] ipam_plugin.go 417: Releasing address using handleID ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" HandleID="k8s-pod-network.99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" Workload="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.279 [INFO][4390] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.279 [INFO][4390] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.298 [WARNING][4390] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" HandleID="k8s-pod-network.99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" Workload="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.299 [INFO][4390] ipam_plugin.go 445: Releasing address using workloadID ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" HandleID="k8s-pod-network.99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" Workload="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.301 [INFO][4390] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:39.315400 containerd[1461]: 2024-10-08 19:53:39.307 [INFO][4377] k8s.go 621: Teardown processing complete. ContainerID="99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab" Oct 8 19:53:39.318073 containerd[1461]: time="2024-10-08T19:53:39.318014979Z" level=info msg="TearDown network for sandbox \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\" successfully" Oct 8 19:53:39.318073 containerd[1461]: time="2024-10-08T19:53:39.318069302Z" level=info msg="StopPodSandbox for \"99d9557a2b2a5f289411e5a347ffaf53fcff04de106ef8ea1e422b9f972433ab\" returns successfully" Oct 8 19:53:39.320691 containerd[1461]: time="2024-10-08T19:53:39.320644746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46ktg,Uid:c2be15c5-243c-4ea8-a2a3-616319911d83,Namespace:calico-system,Attempt:1,}" Oct 8 19:53:39.320844 systemd[1]: run-netns-cni\x2d17cc0cb0\x2d590c\x2d8c07\x2d9492\x2d968758016fc6.mount: Deactivated successfully. Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.258 [INFO][4376] k8s.go 608: Cleaning up netns ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.258 [INFO][4376] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" iface="eth0" netns="/var/run/netns/cni-0bb2ff30-b3f4-0c6a-d9c9-6c78ad78becb" Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.258 [INFO][4376] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" iface="eth0" netns="/var/run/netns/cni-0bb2ff30-b3f4-0c6a-d9c9-6c78ad78becb" Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.259 [INFO][4376] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" iface="eth0" netns="/var/run/netns/cni-0bb2ff30-b3f4-0c6a-d9c9-6c78ad78becb" Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.259 [INFO][4376] k8s.go 615: Releasing IP address(es) ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.259 [INFO][4376] utils.go 188: Calico CNI releasing IP address ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.311 [INFO][4403] ipam_plugin.go 417: Releasing address using handleID ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" HandleID="k8s-pod-network.772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.311 [INFO][4403] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.311 [INFO][4403] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.320 [WARNING][4403] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" HandleID="k8s-pod-network.772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.320 [INFO][4403] ipam_plugin.go 445: Releasing address using workloadID ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" HandleID="k8s-pod-network.772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.321 [INFO][4403] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:39.327811 containerd[1461]: 2024-10-08 19:53:39.324 [INFO][4376] k8s.go 621: Teardown processing complete. ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:39.330905 systemd[1]: run-netns-cni\x2d0bb2ff30\x2db3f4\x2d0c6a\x2dd9c9\x2d6c78ad78becb.mount: Deactivated successfully. 
Oct 8 19:53:39.331356 containerd[1461]: time="2024-10-08T19:53:39.331044709Z" level=info msg="TearDown network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\" successfully" Oct 8 19:53:39.331356 containerd[1461]: time="2024-10-08T19:53:39.331073845Z" level=info msg="StopPodSandbox for \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\" returns successfully" Oct 8 19:53:39.331631 kubelet[2668]: E1008 19:53:39.331585 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:39.334890 containerd[1461]: time="2024-10-08T19:53:39.334850817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84xkt,Uid:57d1dbbc-3c1e-49e7-917f-8d2167c92f3d,Namespace:kube-system,Attempt:1,}" Oct 8 19:53:39.517716 systemd-networkd[1394]: cali1697cb3cc3e: Link UP Oct 8 19:53:39.517979 systemd-networkd[1394]: cali1697cb3cc3e: Gained carrier Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.404 [INFO][4425] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0 coredns-7db6d8ff4d- kube-system 57d1dbbc-3c1e-49e7-917f-8d2167c92f3d 925 0 2024-10-08 19:52:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-84xkt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1697cb3cc3e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84xkt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--84xkt-" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.404 [INFO][4425] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84xkt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.441 [INFO][4441] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" HandleID="k8s-pod-network.1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.459 [INFO][4441] ipam_plugin.go 270: Auto assigning IP ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" HandleID="k8s-pod-network.1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e4db0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-84xkt", "timestamp":"2024-10-08 19:53:39.441070487 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.459 [INFO][4441] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.459 [INFO][4441] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.459 [INFO][4441] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.464 [INFO][4441] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" host="localhost" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.474 [INFO][4441] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.481 [INFO][4441] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.483 [INFO][4441] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.486 [INFO][4441] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.486 [INFO][4441] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" host="localhost" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.489 [INFO][4441] ipam.go 1685: Creating new handle: k8s-pod-network.1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.496 [INFO][4441] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" host="localhost" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.504 [INFO][4441] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" host="localhost" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.505 [INFO][4441] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" host="localhost" Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.505 [INFO][4441] ipam_plugin.go 379: Released host-wide IPAM lock. 
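The run of [INFO][4441] ipam.go messages above is Calico's block-affinity IPAM in a single pass: confirm this host's affinity for the block 192.168.88.128/26, load the block, and claim the next free slot in it (here 192.168.88.131) while holding the host-wide lock. A toy model of that walk, assuming a simple in-memory block — real Calico persists blocks and handles in the datastore and claims a fresh block when one fills up:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a host-affine IPAM block: a CIDR plus a record of which
// addresses are claimed and by which handle.
type block struct {
	cidr    netip.Prefix
	claimed map[netip.Addr]string // addr -> handle ("k8s-pod-network.<containerID>")
}

// assign claims the lowest free address in the block for the given handle,
// mirroring "Attempting to assign 1 addresses from block".
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.claimed[a]; !taken {
			b.claimed[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted: real IPAM would claim another block
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), claimed: map[netip.Addr]string{}}
	// .128-.130 are already spoken for earlier in this log (reserved slot
	// plus the pods on .129 and .130), so pre-claim them; the next grant is
	// then 192.168.88.131, as logged for coredns-7db6d8ff4d-84xkt.
	for i := 0; i < 3; i++ {
		b.assign(fmt.Sprintf("earlier-%d", i))
	}
	ip, _ := b.assign("k8s-pod-network.1777a481ab2c…")
	fmt.Println(ip) // 192.168.88.131
}
```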
Oct 8 19:53:39.536023 containerd[1461]: 2024-10-08 19:53:39.505 [INFO][4441] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" HandleID="k8s-pod-network.1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.536832 containerd[1461]: 2024-10-08 19:53:39.510 [INFO][4425] k8s.go 386: Populated endpoint ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84xkt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57d1dbbc-3c1e-49e7-917f-8d2167c92f3d", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-84xkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1697cb3cc3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:39.536832 containerd[1461]: 2024-10-08 19:53:39.510 [INFO][4425] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84xkt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.536832 containerd[1461]: 2024-10-08 19:53:39.510 [INFO][4425] dataplane_linux.go 68: Setting the host side veth name to cali1697cb3cc3e ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84xkt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.536832 containerd[1461]: 2024-10-08 19:53:39.516 [INFO][4425] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84xkt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.536832 containerd[1461]: 2024-10-08 19:53:39.516 [INFO][4425] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84xkt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57d1dbbc-3c1e-49e7-917f-8d2167c92f3d", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c", Pod:"coredns-7db6d8ff4d-84xkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1697cb3cc3e", MAC:"86:38:76:9b:1d:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:39.536832 containerd[1461]: 2024-10-08 19:53:39.529 [INFO][4425] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-84xkt" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:39.582195 systemd-networkd[1394]: cali0da8af8f6b4: Link UP Oct 8 19:53:39.583730 systemd-networkd[1394]: cali0da8af8f6b4: Gained carrier Oct 8 19:53:39.598508 containerd[1461]: time="2024-10-08T19:53:39.598394154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:39.598824 containerd[1461]: time="2024-10-08T19:53:39.598464047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:39.598824 containerd[1461]: time="2024-10-08T19:53:39.598482662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:39.598824 containerd[1461]: time="2024-10-08T19:53:39.598581189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:39.630234 systemd[1]: Started cri-containerd-1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c.scope - libcontainer container 1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c. Oct 8 19:53:39.647745 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:53:39.677479 containerd[1461]: time="2024-10-08T19:53:39.677416596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-84xkt,Uid:57d1dbbc-3c1e-49e7-917f-8d2167c92f3d,Namespace:kube-system,Attempt:1,} returns sandbox id \"1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c\"" Oct 8 19:53:39.678489 kubelet[2668]: E1008 19:53:39.678441 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:39.684268 containerd[1461]: time="2024-10-08T19:53:39.684205752Z" level=info msg="CreateContainer within sandbox \"1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.404 [INFO][4412] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--46ktg-eth0 csi-node-driver- calico-system c2be15c5-243c-4ea8-a2a3-616319911d83 924 0 2024-10-08 19:53:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-46ktg eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali0da8af8f6b4 [] []}} ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Namespace="calico-system" Pod="csi-node-driver-46ktg" WorkloadEndpoint="localhost-k8s-csi--node--driver--46ktg-" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.404 [INFO][4412] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Namespace="calico-system" Pod="csi-node-driver-46ktg" WorkloadEndpoint="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.458 [INFO][4442] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" HandleID="k8s-pod-network.ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Workload="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.468 [INFO][4442] ipam_plugin.go 270: Auto assigning IP ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" HandleID="k8s-pod-network.ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Workload="localhost-k8s-csi--node--driver--46ktg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050fc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-46ktg", "timestamp":"2024-10-08 19:53:39.458933314 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.468 [INFO][4442] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.505 [INFO][4442] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.506 [INFO][4442] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.510 [INFO][4442] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" host="localhost" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.519 [INFO][4442] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.534 [INFO][4442] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.538 [INFO][4442] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.541 [INFO][4442] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.541 [INFO][4442] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" host="localhost" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.543 [INFO][4442] ipam.go 1685: Creating new handle: k8s-pod-network.ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5 Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.552 [INFO][4442] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" host="localhost" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.560 [INFO][4442] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" host="localhost" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.560 [INFO][4442] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" host="localhost" Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.560 [INFO][4442] ipam_plugin.go 379: Released host-wide IPAM lock. 
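The same walk then repeats for csi-node-driver-46ktg and lands on 192.168.88.132, the next consecutive slot. For reference, the block arithmetic behind that /26: 26 prefix bits leave 6 host bits, i.e. 64 addresses, so 192.168.88.128/26 runs from .128 through .191, and the grants seen in this log (.129, .130, .131, .132) are adjacent entries in it. A one-off check with the standard library:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - p.Bits()) // 6 host bits -> 64 addresses
	last := p.Addr()
	for i := 1; i < size; i++ {
		last = last.Next()
	}
	fmt.Printf("%s holds %d addresses: %s - %s\n", p, size, p.Addr(), last)
	// 192.168.88.128/26 holds 64 addresses: 192.168.88.128 - 192.168.88.191
}
```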
Oct 8 19:53:39.698675 containerd[1461]: 2024-10-08 19:53:39.560 [INFO][4442] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" HandleID="k8s-pod-network.ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Workload="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:39.699511 containerd[1461]: 2024-10-08 19:53:39.575 [INFO][4412] k8s.go 386: Populated endpoint ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Namespace="calico-system" Pod="csi-node-driver-46ktg" WorkloadEndpoint="localhost-k8s-csi--node--driver--46ktg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--46ktg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2be15c5-243c-4ea8-a2a3-616319911d83", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-46ktg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0da8af8f6b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:39.699511 containerd[1461]: 2024-10-08 19:53:39.576 [INFO][4412] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Namespace="calico-system" Pod="csi-node-driver-46ktg" WorkloadEndpoint="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:39.699511 containerd[1461]: 2024-10-08 19:53:39.576 [INFO][4412] dataplane_linux.go 68: Setting the host side veth name to cali0da8af8f6b4 ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Namespace="calico-system" Pod="csi-node-driver-46ktg" WorkloadEndpoint="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:39.699511 containerd[1461]: 2024-10-08 19:53:39.585 [INFO][4412] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Namespace="calico-system" Pod="csi-node-driver-46ktg" WorkloadEndpoint="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:39.699511 containerd[1461]: 2024-10-08 19:53:39.585 [INFO][4412] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Namespace="calico-system" Pod="csi-node-driver-46ktg" WorkloadEndpoint="localhost-k8s-csi--node--driver--46ktg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--46ktg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2be15c5-243c-4ea8-a2a3-616319911d83", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5", Pod:"csi-node-driver-46ktg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0da8af8f6b4", MAC:"a2:28:82:71:03:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:39.699511 containerd[1461]: 2024-10-08 19:53:39.694 [INFO][4412] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5" Namespace="calico-system" Pod="csi-node-driver-46ktg" WorkloadEndpoint="localhost-k8s-csi--node--driver--46ktg-eth0" Oct 8 19:53:40.024174 containerd[1461]: time="2024-10-08T19:53:40.023995357Z" level=info msg="StopPodSandbox for \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\"" Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.197 [WARNING][4541] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0", GenerateName:"calico-kube-controllers-75665d5dcd-", Namespace:"calico-system", SelfLink:"", UID:"2ef45011-2762-475b-836a-fed77ffc1a96", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75665d5dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1", Pod:"calico-kube-controllers-75665d5dcd-2wtg4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c4cf17819c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.197 [INFO][4541] k8s.go 608: Cleaning up netns ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.197 [INFO][4541] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" iface="eth0" netns="" Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.197 [INFO][4541] k8s.go 615: Releasing IP address(es) ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.197 [INFO][4541] utils.go 188: Calico CNI releasing IP address ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.220 [INFO][4550] ipam_plugin.go 417: Releasing address using handleID ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" HandleID="k8s-pod-network.87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.220 [INFO][4550] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.220 [INFO][4550] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.225 [WARNING][4550] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" HandleID="k8s-pod-network.87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.225 [INFO][4550] ipam_plugin.go 445: Releasing address using workloadID ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" HandleID="k8s-pod-network.87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.226 [INFO][4550] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:40.231086 containerd[1461]: 2024-10-08 19:53:40.228 [INFO][4541] k8s.go 621: Teardown processing complete. ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:40.232015 containerd[1461]: time="2024-10-08T19:53:40.231955129Z" level=info msg="TearDown network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\" successfully" Oct 8 19:53:40.232015 containerd[1461]: time="2024-10-08T19:53:40.231991298Z" level=info msg="StopPodSandbox for \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\" returns successfully" Oct 8 19:53:40.252409 containerd[1461]: time="2024-10-08T19:53:40.252354829Z" level=info msg="RemovePodSandbox for \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\"" Oct 8 19:53:40.254545 containerd[1461]: time="2024-10-08T19:53:40.254517009Z" level=info msg="Forcibly stopping sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\"" Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.287 [WARNING][4573] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0", GenerateName:"calico-kube-controllers-75665d5dcd-", Namespace:"calico-system", SelfLink:"", UID:"2ef45011-2762-475b-836a-fed77ffc1a96", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75665d5dcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1", Pod:"calico-kube-controllers-75665d5dcd-2wtg4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c4cf17819c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.288 [INFO][4573] k8s.go 608: Cleaning up netns ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.288 [INFO][4573] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" iface="eth0" netns="" Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.288 [INFO][4573] k8s.go 615: Releasing IP address(es) ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.288 [INFO][4573] utils.go 188: Calico CNI releasing IP address ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.401 [INFO][4580] ipam_plugin.go 417: Releasing address using handleID ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" HandleID="k8s-pod-network.87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.401 [INFO][4580] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.402 [INFO][4580] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.408 [WARNING][4580] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" HandleID="k8s-pod-network.87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.408 [INFO][4580] ipam_plugin.go 445: Releasing address using workloadID ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" HandleID="k8s-pod-network.87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Workload="localhost-k8s-calico--kube--controllers--75665d5dcd--2wtg4-eth0" Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.409 [INFO][4580] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:40.415114 containerd[1461]: 2024-10-08 19:53:40.412 [INFO][4573] k8s.go 621: Teardown processing complete. ContainerID="87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25" Oct 8 19:53:40.418756 containerd[1461]: time="2024-10-08T19:53:40.415179239Z" level=info msg="TearDown network for sandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\" successfully" Oct 8 19:53:40.435520 containerd[1461]: time="2024-10-08T19:53:40.435390001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:53:40.435758 containerd[1461]: time="2024-10-08T19:53:40.435505960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:53:40.435758 containerd[1461]: time="2024-10-08T19:53:40.435527521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:40.436490 containerd[1461]: time="2024-10-08T19:53:40.436396589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:53:40.463141 systemd[1]: Started cri-containerd-ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5.scope - libcontainer container ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5. 
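A side note on the kubelet dns.go:153 "Nameserver limits exceeded" errors that recur throughout this log: the glibc resolver reads at most 3 nameserver entries from resolv.conf, so when the file carries more, kubelet applies only the first three (hence exactly "1.1.1.1 1.0.0.1 8.8.8.8") and reports the rest as omitted. A toy version of that clamping — the constant and message wording mirror the log, not kubelet's actual code, and the fourth server below is a hypothetical extra entry:

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc's resolver (MAXNS) honors at most 3 entries

// clamp keeps the first maxNameservers entries and warns about the rest,
// echoing the kubelet error seen repeatedly above.
func clamp(servers []string) []string {
	if len(servers) <= maxNameservers {
		return servers
	}
	kept := servers[:maxNameservers]
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
		strings.Join(kept, " "))
	return kept
}

func main() {
	clamp([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "10.0.0.53"}) // hypothetical 4th entry is dropped
}
```

The error is cosmetic for the pods here (cluster DNS still resolves); it simply flags that some upstream resolvers will never be consulted.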
Oct 8 19:53:40.478457 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:53:40.490670 containerd[1461]: time="2024-10-08T19:53:40.490602188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-46ktg,Uid:c2be15c5-243c-4ea8-a2a3-616319911d83,Namespace:calico-system,Attempt:1,} returns sandbox id \"ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5\"" Oct 8 19:53:40.559310 containerd[1461]: time="2024-10-08T19:53:40.559237297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:40.561980 containerd[1461]: time="2024-10-08T19:53:40.561875930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 8 19:53:40.564974 containerd[1461]: time="2024-10-08T19:53:40.564803891Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:40.565302 containerd[1461]: time="2024-10-08T19:53:40.565209541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:53:40.565348 containerd[1461]: time="2024-10-08T19:53:40.565314199Z" level=info msg="RemovePodSandbox \"87823b8fa7b592283ba24f8fc8b1f027e85ffabee37a280400441c406e845b25\" returns successfully" Oct 8 19:53:40.566183 containerd[1461]: time="2024-10-08T19:53:40.566130096Z" level=info msg="StopPodSandbox for \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\"" Oct 8 19:53:40.569817 containerd[1461]: time="2024-10-08T19:53:40.569761912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:40.570578 containerd[1461]: time="2024-10-08T19:53:40.570538676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.045439418s" Oct 8 19:53:40.570632 containerd[1461]: time="2024-10-08T19:53:40.570581787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 8 19:53:40.572232 containerd[1461]: time="2024-10-08T19:53:40.571974167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:53:40.576062 containerd[1461]: time="2024-10-08T19:53:40.575413308Z" level=info msg="CreateContainer within sandbox \"1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8296b057268f9c0a8bcc4b4a441edc37586fc0458d811e1c62c3f6b0b3949fc7\"" Oct 8 19:53:40.576371 containerd[1461]: time="2024-10-08T19:53:40.576222803Z" level=info msg="StartContainer for 
\"8296b057268f9c0a8bcc4b4a441edc37586fc0458d811e1c62c3f6b0b3949fc7\"" Oct 8 19:53:40.590042 containerd[1461]: time="2024-10-08T19:53:40.589935594Z" level=info msg="CreateContainer within sandbox \"4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:53:40.612281 systemd[1]: Started cri-containerd-8296b057268f9c0a8bcc4b4a441edc37586fc0458d811e1c62c3f6b0b3949fc7.scope - libcontainer container 8296b057268f9c0a8bcc4b4a441edc37586fc0458d811e1c62c3f6b0b3949fc7. Oct 8 19:53:40.620947 containerd[1461]: time="2024-10-08T19:53:40.620805276Z" level=info msg="CreateContainer within sandbox \"4e45a951be88af329cc8b92a08828029b56e3c9a37841ced4bd9726e862e8ee1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"767a85aef01737d29e034a7f2b4b47d24eeb5e96d23d1f954ff44b82f3e8e053\"" Oct 8 19:53:40.622168 containerd[1461]: time="2024-10-08T19:53:40.622130678Z" level=info msg="StartContainer for \"767a85aef01737d29e034a7f2b4b47d24eeb5e96d23d1f954ff44b82f3e8e053\"" Oct 8 19:53:40.672331 systemd[1]: Started cri-containerd-767a85aef01737d29e034a7f2b4b47d24eeb5e96d23d1f954ff44b82f3e8e053.scope - libcontainer container 767a85aef01737d29e034a7f2b4b47d24eeb5e96d23d1f954ff44b82f3e8e053. Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.623 [WARNING][4643] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c1fbc285-e14b-4647-ab1f-3d69ffb9be3b", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719", Pod:"coredns-7db6d8ff4d-pcbsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia34d0c6b136", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.624 [INFO][4643] k8s.go 608: Cleaning up netns ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 
19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.624 [INFO][4643] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" iface="eth0" netns="" Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.624 [INFO][4643] k8s.go 615: Releasing IP address(es) ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.624 [INFO][4643] utils.go 188: Calico CNI releasing IP address ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.657 [INFO][4680] ipam_plugin.go 417: Releasing address using handleID ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" HandleID="k8s-pod-network.8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.657 [INFO][4680] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.657 [INFO][4680] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.665 [WARNING][4680] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" HandleID="k8s-pod-network.8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.665 [INFO][4680] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" HandleID="k8s-pod-network.8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.667 [INFO][4680] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:40.679762 containerd[1461]: 2024-10-08 19:53:40.676 [INFO][4643] k8s.go 621: Teardown processing complete. 
ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:40.681547 containerd[1461]: time="2024-10-08T19:53:40.680090366Z" level=info msg="TearDown network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\" successfully" Oct 8 19:53:40.681547 containerd[1461]: time="2024-10-08T19:53:40.680159136Z" level=info msg="StopPodSandbox for \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\" returns successfully" Oct 8 19:53:40.681547 containerd[1461]: time="2024-10-08T19:53:40.681413785Z" level=info msg="RemovePodSandbox for \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\"" Oct 8 19:53:40.683479 containerd[1461]: time="2024-10-08T19:53:40.682093814Z" level=info msg="Forcibly stopping sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\"" Oct 8 19:53:40.683979 containerd[1461]: time="2024-10-08T19:53:40.683908465Z" level=info msg="StartContainer for \"8296b057268f9c0a8bcc4b4a441edc37586fc0458d811e1c62c3f6b0b3949fc7\" returns successfully" Oct 8 19:53:40.700162 systemd-networkd[1394]: cali1697cb3cc3e: Gained IPv6LL Oct 8 19:53:40.749414 containerd[1461]: time="2024-10-08T19:53:40.749257363Z" level=info msg="StartContainer for \"767a85aef01737d29e034a7f2b4b47d24eeb5e96d23d1f954ff44b82f3e8e053\" returns successfully" Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.752 [WARNING][4738] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c1fbc285-e14b-4647-ab1f-3d69ffb9be3b", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0048d34cda1c2b938290ed751a95d8e2b8eff8096737051cafdfe24fb5347719", Pod:"coredns-7db6d8ff4d-pcbsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia34d0c6b136", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.752 [INFO][4738] k8s.go 608: Cleaning up netns 
ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.753 [INFO][4738] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" iface="eth0" netns="" Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.753 [INFO][4738] k8s.go 615: Releasing IP address(es) ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.753 [INFO][4738] utils.go 188: Calico CNI releasing IP address ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.779 [INFO][4761] ipam_plugin.go 417: Releasing address using handleID ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" HandleID="k8s-pod-network.8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.779 [INFO][4761] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.779 [INFO][4761] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.785 [WARNING][4761] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" HandleID="k8s-pod-network.8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.785 [INFO][4761] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" HandleID="k8s-pod-network.8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Workload="localhost-k8s-coredns--7db6d8ff4d--pcbsv-eth0" Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.788 [INFO][4761] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:40.794743 containerd[1461]: 2024-10-08 19:53:40.791 [INFO][4738] k8s.go 621: Teardown processing complete. ContainerID="8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0" Oct 8 19:53:40.795510 containerd[1461]: time="2024-10-08T19:53:40.794811257Z" level=info msg="TearDown network for sandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\" successfully" Oct 8 19:53:40.806813 containerd[1461]: time="2024-10-08T19:53:40.806738933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:53:40.807035 containerd[1461]: time="2024-10-08T19:53:40.806838923Z" level=info msg="RemovePodSandbox \"8c8fcfff5b7a8a912fb2869ef5e1442b34f64e0332b5e1eb91a768033cec8ec0\" returns successfully" Oct 8 19:53:40.807550 containerd[1461]: time="2024-10-08T19:53:40.807518902Z" level=info msg="StopPodSandbox for \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\"" Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.860 [WARNING][4785] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57d1dbbc-3c1e-49e7-917f-8d2167c92f3d", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c", Pod:"coredns-7db6d8ff4d-84xkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1697cb3cc3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.866 [INFO][4785] k8s.go 608: Cleaning up netns ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.866 [INFO][4785] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" iface="eth0" netns="" Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.866 [INFO][4785] k8s.go 615: Releasing IP address(es) ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.866 [INFO][4785] utils.go 188: Calico CNI releasing IP address ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.896 [INFO][4794] ipam_plugin.go 417: Releasing address using handleID ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" HandleID="k8s-pod-network.772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.897 [INFO][4794] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.897 [INFO][4794] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.903 [WARNING][4794] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" HandleID="k8s-pod-network.772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.903 [INFO][4794] ipam_plugin.go 445: Releasing address using workloadID ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" HandleID="k8s-pod-network.772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.905 [INFO][4794] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:40.910880 containerd[1461]: 2024-10-08 19:53:40.908 [INFO][4785] k8s.go 621: Teardown processing complete. ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:40.911361 containerd[1461]: time="2024-10-08T19:53:40.910965778Z" level=info msg="TearDown network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\" successfully" Oct 8 19:53:40.911361 containerd[1461]: time="2024-10-08T19:53:40.911000523Z" level=info msg="StopPodSandbox for \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\" returns successfully" Oct 8 19:53:40.911967 containerd[1461]: time="2024-10-08T19:53:40.911615850Z" level=info msg="RemovePodSandbox for \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\"" Oct 8 19:53:40.911967 containerd[1461]: time="2024-10-08T19:53:40.911646999Z" level=info msg="Forcibly stopping sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\"" Oct 8 19:53:40.956180 systemd-networkd[1394]: cali0da8af8f6b4: Gained IPv6LL Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.953 [WARNING][4817] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"57d1dbbc-3c1e-49e7-917f-8d2167c92f3d", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 52, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1777a481ab2c8dd99096faf4fa73be6cb53da71b5da177950674783d256c431c", Pod:"coredns-7db6d8ff4d-84xkt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1697cb3cc3e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.953 [INFO][4817] k8s.go 608: Cleaning up netns ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.953 [INFO][4817] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" iface="eth0" netns="" Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.953 [INFO][4817] k8s.go 615: Releasing IP address(es) ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.953 [INFO][4817] utils.go 188: Calico CNI releasing IP address ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.980 [INFO][4825] ipam_plugin.go 417: Releasing address using handleID ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" HandleID="k8s-pod-network.772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.980 [INFO][4825] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.980 [INFO][4825] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.988 [WARNING][4825] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" HandleID="k8s-pod-network.772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.988 [INFO][4825] ipam_plugin.go 445: Releasing address using workloadID ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" HandleID="k8s-pod-network.772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Workload="localhost-k8s-coredns--7db6d8ff4d--84xkt-eth0" Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:40.999 [INFO][4825] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:53:41.005269 containerd[1461]: 2024-10-08 19:53:41.002 [INFO][4817] k8s.go 621: Teardown processing complete. ContainerID="772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f" Oct 8 19:53:41.005933 containerd[1461]: time="2024-10-08T19:53:41.005870845Z" level=info msg="TearDown network for sandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\" successfully" Oct 8 19:53:41.017826 containerd[1461]: time="2024-10-08T19:53:41.017749854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:53:41.018138 containerd[1461]: time="2024-10-08T19:53:41.017850685Z" level=info msg="RemovePodSandbox \"772f753e7594354ed0d85b53af7c03198170d0227f09e3eb21de622df862c26f\" returns successfully" Oct 8 19:53:41.537391 kubelet[2668]: E1008 19:53:41.537009 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:41.757147 kubelet[2668]: I1008 19:53:41.757050 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75665d5dcd-2wtg4" podStartSLOduration=33.709871256 podStartE2EDuration="36.757031898s" podCreationTimestamp="2024-10-08 19:53:05 +0000 UTC" firstStartedPulling="2024-10-08 19:53:37.524655335 +0000 UTC m=+57.577797005" lastFinishedPulling="2024-10-08 19:53:40.571815977 +0000 UTC m=+60.624957647" observedRunningTime="2024-10-08 19:53:41.756778147 +0000 UTC m=+61.809919808" watchObservedRunningTime="2024-10-08 19:53:41.757031898 +0000 UTC m=+61.810173568" Oct 8 19:53:42.179471 kubelet[2668]: I1008 19:53:42.178795 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-84xkt" podStartSLOduration=46.178768897 podStartE2EDuration="46.178768897s" podCreationTimestamp="2024-10-08 19:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:53:42.158994135 +0000 UTC m=+62.212135805" watchObservedRunningTime="2024-10-08 19:53:42.178768897 +0000 UTC m=+62.231910567" Oct 8 19:53:42.571286 kubelet[2668]: E1008 19:53:42.571247 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:42.590633 containerd[1461]: time="2024-10-08T19:53:42.590549432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 
19:53:42.591387 containerd[1461]: time="2024-10-08T19:53:42.591314572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 8 19:53:42.594443 containerd[1461]: time="2024-10-08T19:53:42.594391103Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:42.597094 containerd[1461]: time="2024-10-08T19:53:42.597035615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:42.597639 containerd[1461]: time="2024-10-08T19:53:42.597596789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 2.025577957s" Oct 8 19:53:42.597639 containerd[1461]: time="2024-10-08T19:53:42.597628780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 8 19:53:42.600097 containerd[1461]: time="2024-10-08T19:53:42.600065458Z" level=info msg="CreateContainer within sandbox \"ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:53:42.628020 containerd[1461]: time="2024-10-08T19:53:42.627954479Z" level=info msg="CreateContainer within sandbox \"ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"453332137aca8c34ddd9e953b79529482954c8b627ea5eb472b10cf78d5bbe09\"" Oct 8 19:53:42.629987 containerd[1461]: time="2024-10-08T19:53:42.629946114Z" level=info msg="StartContainer for \"453332137aca8c34ddd9e953b79529482954c8b627ea5eb472b10cf78d5bbe09\"" Oct 8 19:53:42.667263 systemd[1]: run-containerd-runc-k8s.io-453332137aca8c34ddd9e953b79529482954c8b627ea5eb472b10cf78d5bbe09-runc.cYaSD6.mount: Deactivated successfully. Oct 8 19:53:42.677127 systemd[1]: Started cri-containerd-453332137aca8c34ddd9e953b79529482954c8b627ea5eb472b10cf78d5bbe09.scope - libcontainer container 453332137aca8c34ddd9e953b79529482954c8b627ea5eb472b10cf78d5bbe09. Oct 8 19:53:42.853778 containerd[1461]: time="2024-10-08T19:53:42.853654966Z" level=info msg="StartContainer for \"453332137aca8c34ddd9e953b79529482954c8b627ea5eb472b10cf78d5bbe09\" returns successfully" Oct 8 19:53:42.855173 containerd[1461]: time="2024-10-08T19:53:42.855118659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:53:43.575081 kubelet[2668]: E1008 19:53:43.575045 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:43.599977 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:45532.service - OpenSSH per-connection server daemon (10.0.0.1:45532). 
Oct 8 19:53:43.823487 sshd[4907]: Accepted publickey for core from 10.0.0.1 port 45532 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:43.825733 sshd[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:43.830563 systemd-logind[1446]: New session 16 of user core. Oct 8 19:53:43.836229 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:53:43.980104 sshd[4907]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:43.983358 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:45532.service: Deactivated successfully. Oct 8 19:53:43.985659 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:53:43.987244 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:53:43.988331 systemd-logind[1446]: Removed session 16. Oct 8 19:53:45.912269 containerd[1461]: time="2024-10-08T19:53:45.912213833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:45.913067 containerd[1461]: time="2024-10-08T19:53:45.912989303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 8 19:53:45.914299 containerd[1461]: time="2024-10-08T19:53:45.914272404Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:45.916819 containerd[1461]: time="2024-10-08T19:53:45.916788190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:53:45.917644 containerd[1461]: time="2024-10-08T19:53:45.917599818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 3.062445419s" Oct 8 19:53:45.917694 containerd[1461]: time="2024-10-08T19:53:45.917647729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 8 19:53:45.919774 containerd[1461]: time="2024-10-08T19:53:45.919742657Z" level=info msg="CreateContainer within sandbox \"ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:53:45.940197 containerd[1461]: time="2024-10-08T19:53:45.940150229Z" level=info msg="CreateContainer within sandbox \"ebbb244ed6fd8ce549b80fafa4cb2efb1d7d6c333e9475be4a50f1c1334c17a5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b2363f251f996d7505562b93cae924540dc9a3a9072b5a3cf2e28d9cceb2a678\"" Oct 8 19:53:45.940687 containerd[1461]: time="2024-10-08T19:53:45.940664873Z" level=info msg="StartContainer for \"b2363f251f996d7505562b93cae924540dc9a3a9072b5a3cf2e28d9cceb2a678\"" Oct 8 19:53:45.970117 systemd[1]: Started cri-containerd-b2363f251f996d7505562b93cae924540dc9a3a9072b5a3cf2e28d9cceb2a678.scope - libcontainer container 
b2363f251f996d7505562b93cae924540dc9a3a9072b5a3cf2e28d9cceb2a678. Oct 8 19:53:46.004432 containerd[1461]: time="2024-10-08T19:53:46.004316308Z" level=info msg="StartContainer for \"b2363f251f996d7505562b93cae924540dc9a3a9072b5a3cf2e28d9cceb2a678\" returns successfully" Oct 8 19:53:46.119178 kubelet[2668]: I1008 19:53:46.119112 2668 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:53:46.119178 kubelet[2668]: I1008 19:53:46.119156 2668 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:53:46.593466 kubelet[2668]: I1008 19:53:46.592832 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-46ktg" podStartSLOduration=36.16644363 podStartE2EDuration="41.59280985s" podCreationTimestamp="2024-10-08 19:53:05 +0000 UTC" firstStartedPulling="2024-10-08 19:53:40.492049793 +0000 UTC m=+60.545191463" lastFinishedPulling="2024-10-08 19:53:45.918416013 +0000 UTC m=+65.971557683" observedRunningTime="2024-10-08 19:53:46.592031896 +0000 UTC m=+66.645173566" watchObservedRunningTime="2024-10-08 19:53:46.59280985 +0000 UTC m=+66.645951520" Oct 8 19:53:48.992987 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:45652.service - OpenSSH per-connection server daemon (10.0.0.1:45652). Oct 8 19:53:49.033362 sshd[4966]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:49.035412 sshd[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:49.040116 systemd-logind[1446]: New session 17 of user core. Oct 8 19:53:49.052162 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:53:49.206196 sshd[4966]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:49.216279 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:45652.service: Deactivated successfully. Oct 8 19:53:49.218391 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:53:49.219970 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:53:49.228246 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:45654.service - OpenSSH per-connection server daemon (10.0.0.1:45654). Oct 8 19:53:49.229729 systemd-logind[1446]: Removed session 17. Oct 8 19:53:49.263802 sshd[4980]: Accepted publickey for core from 10.0.0.1 port 45654 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:49.265433 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:49.269223 systemd-logind[1446]: New session 18 of user core. Oct 8 19:53:49.278023 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:53:49.571029 sshd[4980]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:49.582534 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:45654.service: Deactivated successfully. Oct 8 19:53:49.584536 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:53:49.586189 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:53:49.591235 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:45658.service - OpenSSH per-connection server daemon (10.0.0.1:45658). Oct 8 19:53:49.592135 systemd-logind[1446]: Removed session 18. 
Oct 8 19:53:49.627626 sshd[4992]: Accepted publickey for core from 10.0.0.1 port 45658 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:49.629221 sshd[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:49.633249 systemd-logind[1446]: New session 19 of user core. Oct 8 19:53:49.640068 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:53:50.039318 kubelet[2668]: E1008 19:53:50.039287 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:51.651896 sshd[4992]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:51.662321 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:45658.service: Deactivated successfully. Oct 8 19:53:51.664609 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:53:51.666350 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:53:51.672261 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:52762.service - OpenSSH per-connection server daemon (10.0.0.1:52762). Oct 8 19:53:51.673190 systemd-logind[1446]: Removed session 19. Oct 8 19:53:51.720239 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 52762 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:51.721997 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:51.726134 systemd-logind[1446]: New session 20 of user core. Oct 8 19:53:51.738102 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 19:53:52.170139 sshd[5024]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:52.177933 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:52762.service: Deactivated successfully. Oct 8 19:53:52.179825 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 19:53:52.181565 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Oct 8 19:53:52.187399 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:52776.service - OpenSSH per-connection server daemon (10.0.0.1:52776). Oct 8 19:53:52.188674 systemd-logind[1446]: Removed session 20. Oct 8 19:53:52.220861 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 52776 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:52.222808 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:52.229440 systemd-logind[1446]: New session 21 of user core. Oct 8 19:53:52.238117 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 19:53:52.353928 sshd[5036]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:52.358854 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:52776.service: Deactivated successfully. Oct 8 19:53:52.361272 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 19:53:52.361987 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Oct 8 19:53:52.362953 systemd-logind[1446]: Removed session 21. Oct 8 19:53:52.745785 kubelet[2668]: E1008 19:53:52.745746 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:53:53.410909 systemd[1]: run-containerd-runc-k8s.io-767a85aef01737d29e034a7f2b4b47d24eeb5e96d23d1f954ff44b82f3e8e053-runc.wkMetv.mount: Deactivated successfully. 
Oct 8 19:53:57.366491 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:52782.service - OpenSSH per-connection server daemon (10.0.0.1:52782). Oct 8 19:53:57.408063 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 52782 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:53:57.409889 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:57.414575 systemd-logind[1446]: New session 22 of user core. Oct 8 19:53:57.420266 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 8 19:53:57.534584 sshd[5105]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:57.538313 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:52782.service: Deactivated successfully. Oct 8 19:53:57.540456 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 19:53:57.541076 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Oct 8 19:53:57.541873 systemd-logind[1446]: Removed session 22. Oct 8 19:54:02.548211 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:48266.service - OpenSSH per-connection server daemon (10.0.0.1:48266). Oct 8 19:54:02.588218 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 48266 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:54:02.590100 sshd[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:02.594571 systemd-logind[1446]: New session 23 of user core. Oct 8 19:54:02.602168 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 19:54:02.709760 sshd[5143]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:02.715294 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:48266.service: Deactivated successfully. Oct 8 19:54:02.718100 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 19:54:02.718807 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Oct 8 19:54:02.719878 systemd-logind[1446]: Removed session 23. Oct 8 19:54:06.039819 kubelet[2668]: E1008 19:54:06.039759 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:07.722636 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:48274.service - OpenSSH per-connection server daemon (10.0.0.1:48274). Oct 8 19:54:07.764850 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 48274 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:54:07.766886 sshd[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:07.771558 systemd-logind[1446]: New session 24 of user core. Oct 8 19:54:07.778221 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 8 19:54:07.889031 sshd[5163]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:07.893024 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:48274.service: Deactivated successfully. Oct 8 19:54:07.895714 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 19:54:07.896613 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. Oct 8 19:54:07.897719 systemd-logind[1446]: Removed session 24. 
Oct 8 19:54:09.039462 kubelet[2668]: E1008 19:54:09.039411 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:12.294960 kubelet[2668]: I1008 19:54:12.294053 2668 topology_manager.go:215] "Topology Admit Handler" podUID="65b8017b-c2df-4cd0-8bdb-7da30a8d9b67" podNamespace="calico-apiserver" podName="calico-apiserver-7cfdcf7ff-g4jfj" Oct 8 19:54:12.299643 kubelet[2668]: W1008 19:54:12.299607 2668 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Oct 8 19:54:12.299803 kubelet[2668]: E1008 19:54:12.299656 2668 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Oct 8 19:54:12.307164 systemd[1]: Created slice kubepods-besteffort-pod65b8017b_c2df_4cd0_8bdb_7da30a8d9b67.slice - libcontainer container kubepods-besteffort-pod65b8017b_c2df_4cd0_8bdb_7da30a8d9b67.slice. Oct 8 19:54:12.309270 kubelet[2668]: I1008 19:54:12.308837 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/65b8017b-c2df-4cd0-8bdb-7da30a8d9b67-calico-apiserver-certs\") pod \"calico-apiserver-7cfdcf7ff-g4jfj\" (UID: \"65b8017b-c2df-4cd0-8bdb-7da30a8d9b67\") " pod="calico-apiserver/calico-apiserver-7cfdcf7ff-g4jfj" Oct 8 19:54:12.309357 kubelet[2668]: I1008 19:54:12.309273 2668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st294\" (UniqueName: \"kubernetes.io/projected/65b8017b-c2df-4cd0-8bdb-7da30a8d9b67-kube-api-access-st294\") pod \"calico-apiserver-7cfdcf7ff-g4jfj\" (UID: \"65b8017b-c2df-4cd0-8bdb-7da30a8d9b67\") " pod="calico-apiserver/calico-apiserver-7cfdcf7ff-g4jfj" Oct 8 19:54:12.904150 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:37816.service - OpenSSH per-connection server daemon (10.0.0.1:37816). Oct 8 19:54:12.941344 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 37816 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:54:12.943473 sshd[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:12.948521 systemd-logind[1446]: New session 25 of user core. Oct 8 19:54:12.956248 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 8 19:54:13.076602 sshd[5188]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:13.081138 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:37816.service: Deactivated successfully. Oct 8 19:54:13.083820 systemd[1]: session-25.scope: Deactivated successfully. Oct 8 19:54:13.084581 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit. Oct 8 19:54:13.085772 systemd-logind[1446]: Removed session 25. 
Oct 8 19:54:13.513714 containerd[1461]: time="2024-10-08T19:54:13.513652111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfdcf7ff-g4jfj,Uid:65b8017b-c2df-4cd0-8bdb-7da30a8d9b67,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:54:13.653227 systemd-networkd[1394]: cali3bdd6ef6c53: Link UP Oct 8 19:54:13.655337 systemd-networkd[1394]: cali3bdd6ef6c53: Gained carrier Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.576 [INFO][5205] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0 calico-apiserver-7cfdcf7ff- calico-apiserver 65b8017b-c2df-4cd0-8bdb-7da30a8d9b67 1172 0 2024-10-08 19:54:12 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cfdcf7ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cfdcf7ff-g4jfj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3bdd6ef6c53 [] []}} ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Namespace="calico-apiserver" Pod="calico-apiserver-7cfdcf7ff-g4jfj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.576 [INFO][5205] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Namespace="calico-apiserver" Pod="calico-apiserver-7cfdcf7ff-g4jfj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.607 [INFO][5218] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" HandleID="k8s-pod-network.631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Workload="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.615 [INFO][5218] ipam_plugin.go 270: Auto assigning IP ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" HandleID="k8s-pod-network.631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Workload="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135bf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cfdcf7ff-g4jfj", "timestamp":"2024-10-08 19:54:13.607775713 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.615 [INFO][5218] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.615 [INFO][5218] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.616 [INFO][5218] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.617 [INFO][5218] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" host="localhost" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.622 [INFO][5218] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.626 [INFO][5218] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.628 [INFO][5218] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.631 [INFO][5218] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.631 [INFO][5218] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" host="localhost" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.633 [INFO][5218] ipam.go 1685: Creating new handle: k8s-pod-network.631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.638 [INFO][5218] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" host="localhost" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.644 [INFO][5218] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" host="localhost" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.644 [INFO][5218] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" host="localhost" Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.645 [INFO][5218] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:54:13.669755 containerd[1461]: 2024-10-08 19:54:13.645 [INFO][5218] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" HandleID="k8s-pod-network.631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Workload="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" Oct 8 19:54:13.670560 containerd[1461]: 2024-10-08 19:54:13.648 [INFO][5205] k8s.go 386: Populated endpoint ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Namespace="calico-apiserver" Pod="calico-apiserver-7cfdcf7ff-g4jfj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0", GenerateName:"calico-apiserver-7cfdcf7ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"65b8017b-c2df-4cd0-8bdb-7da30a8d9b67", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 54, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfdcf7ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cfdcf7ff-g4jfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bdd6ef6c53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:13.670560 containerd[1461]: 2024-10-08 19:54:13.648 [INFO][5205] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Namespace="calico-apiserver" Pod="calico-apiserver-7cfdcf7ff-g4jfj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" Oct 8 19:54:13.670560 containerd[1461]: 2024-10-08 19:54:13.648 [INFO][5205] dataplane_linux.go 68: Setting the host side veth name to cali3bdd6ef6c53 ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Namespace="calico-apiserver" Pod="calico-apiserver-7cfdcf7ff-g4jfj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" Oct 8 19:54:13.670560 containerd[1461]: 2024-10-08 19:54:13.651 [INFO][5205] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Namespace="calico-apiserver" Pod="calico-apiserver-7cfdcf7ff-g4jfj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" Oct 8 19:54:13.670560 containerd[1461]: 2024-10-08 19:54:13.651 [INFO][5205] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Namespace="calico-apiserver" 
Pod="calico-apiserver-7cfdcf7ff-g4jfj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0", GenerateName:"calico-apiserver-7cfdcf7ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"65b8017b-c2df-4cd0-8bdb-7da30a8d9b67", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 54, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfdcf7ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b", Pod:"calico-apiserver-7cfdcf7ff-g4jfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bdd6ef6c53", MAC:"72:0c:1d:96:09:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:54:13.670560 containerd[1461]: 2024-10-08 19:54:13.664 [INFO][5205] k8s.go 500: Wrote updated endpoint to datastore ContainerID="631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b" Namespace="calico-apiserver" Pod="calico-apiserver-7cfdcf7ff-g4jfj" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfdcf7ff--g4jfj-eth0" Oct 8 19:54:13.701507 containerd[1461]: time="2024-10-08T19:54:13.701377749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:54:13.701507 containerd[1461]: time="2024-10-08T19:54:13.701473441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:54:13.701672 containerd[1461]: time="2024-10-08T19:54:13.701493619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:13.701672 containerd[1461]: time="2024-10-08T19:54:13.701610479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:54:13.734285 systemd[1]: Started cri-containerd-631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b.scope - libcontainer container 631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b. 
Oct 8 19:54:13.752981 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:54:13.783490 containerd[1461]: time="2024-10-08T19:54:13.783436993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfdcf7ff-g4jfj,Uid:65b8017b-c2df-4cd0-8bdb-7da30a8d9b67,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b\"" Oct 8 19:54:13.785215 containerd[1461]: time="2024-10-08T19:54:13.785171349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:54:15.196108 systemd-networkd[1394]: cali3bdd6ef6c53: Gained IPv6LL Oct 8 19:54:18.039303 kubelet[2668]: E1008 19:54:18.039257 2668 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:54:18.094609 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:37822.service - OpenSSH per-connection server daemon (10.0.0.1:37822). Oct 8 19:54:19.034847 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 37822 ssh2: RSA SHA256:/xN8BdcoCidXIeJRfI4jO6TdLokQFeWhvR5OfwObqUI Oct 8 19:54:19.036909 sshd[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:19.041555 systemd-logind[1446]: New session 26 of user core. Oct 8 19:54:19.049111 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 8 19:54:19.431408 sshd[5294]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:19.435891 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:37822.service: Deactivated successfully. Oct 8 19:54:19.438143 systemd[1]: session-26.scope: Deactivated successfully. Oct 8 19:54:19.438782 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit. Oct 8 19:54:19.439737 systemd-logind[1446]: Removed session 26. 
Oct 8 19:54:20.512334 containerd[1461]: time="2024-10-08T19:54:20.512257012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:20.585833 containerd[1461]: time="2024-10-08T19:54:20.585714721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 8 19:54:20.665717 containerd[1461]: time="2024-10-08T19:54:20.665665901Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:20.693618 containerd[1461]: time="2024-10-08T19:54:20.693412992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:54:20.694555 containerd[1461]: time="2024-10-08T19:54:20.694474836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 6.909265155s" Oct 8 19:54:20.694555 containerd[1461]: time="2024-10-08T19:54:20.694546762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 19:54:20.697661 containerd[1461]: time="2024-10-08T19:54:20.697604455Z" level=info msg="CreateContainer within sandbox \"631cb04bda6319cc5b77a188b0eacdd7002cb91321b82910fd1e631b9862586b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"