Nov 12 20:53:31.974876 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:53:31.974906 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:31.974937 kernel: BIOS-provided physical RAM map:
Nov 12 20:53:31.974946 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 12 20:53:31.974954 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 12 20:53:31.974963 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 12 20:53:31.974973 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 12 20:53:31.974982 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 12 20:53:31.974991 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 12 20:53:31.975003 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 12 20:53:31.975016 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 12 20:53:31.975025 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 12 20:53:31.975034 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 12 20:53:31.975043 kernel: NX (Execute Disable) protection: active
Nov 12 20:53:31.975054 kernel: APIC: Static calls initialized
Nov 12 20:53:31.975067 kernel: SMBIOS 2.8 present.
Nov 12 20:53:31.975080 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 12 20:53:31.975089 kernel: Hypervisor detected: KVM
Nov 12 20:53:31.975099 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:53:31.975108 kernel: kvm-clock: using sched offset of 3404628265 cycles
Nov 12 20:53:31.975119 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:53:31.975128 kernel: tsc: Detected 2794.744 MHz processor
Nov 12 20:53:31.975139 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:53:31.975149 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:53:31.975162 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 12 20:53:31.975172 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 12 20:53:31.975182 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:53:31.975192 kernel: Using GB pages for direct mapping
Nov 12 20:53:31.975202 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:53:31.975211 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 12 20:53:31.975221 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:31.975231 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:31.975241 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:31.975254 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 12 20:53:31.975264 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:31.975274 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:31.975284 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:31.975293 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 12 20:53:31.975303 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Nov 12 20:53:31.975313 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Nov 12 20:53:31.975332 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 12 20:53:31.975345 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Nov 12 20:53:31.975355 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Nov 12 20:53:31.975366 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Nov 12 20:53:31.975376 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Nov 12 20:53:31.975386 kernel: No NUMA configuration found
Nov 12 20:53:31.975396 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 12 20:53:31.975409 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 12 20:53:31.975420 kernel: Zone ranges:
Nov 12 20:53:31.975430 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:53:31.975440 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 12 20:53:31.975450 kernel: Normal empty
Nov 12 20:53:31.975460 kernel: Movable zone start for each node
Nov 12 20:53:31.975470 kernel: Early memory node ranges
Nov 12 20:53:31.975480 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 12 20:53:31.975490 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 12 20:53:31.975500 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 12 20:53:31.975517 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:53:31.975527 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 12 20:53:31.975537 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 12 20:53:31.975547 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 12 20:53:31.975557 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:53:31.975568 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:53:31.975582 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 12 20:53:31.975596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:53:31.975610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:53:31.975630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:53:31.975640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:53:31.975651 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:53:31.975661 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 12 20:53:31.975722 kernel: TSC deadline timer available
Nov 12 20:53:31.975746 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 12 20:53:31.975772 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 12 20:53:31.975798 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 12 20:53:31.975815 kernel: kvm-guest: setup PV sched yield
Nov 12 20:53:31.975834 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 12 20:53:31.975844 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:53:31.975855 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:53:31.975866 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 12 20:53:31.975877 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Nov 12 20:53:31.975887 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Nov 12 20:53:31.975896 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 12 20:53:31.975906 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:53:31.975933 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:53:31.975950 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:31.975961 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:53:31.975971 kernel: random: crng init done
Nov 12 20:53:31.975982 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:53:31.975992 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 20:53:31.976003 kernel: Fallback order for Node 0: 0
Nov 12 20:53:31.976013 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 12 20:53:31.976023 kernel: Policy zone: DMA32
Nov 12 20:53:31.976037 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:53:31.976048 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 136900K reserved, 0K cma-reserved)
Nov 12 20:53:31.976058 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 20:53:31.976069 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:53:31.976079 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:53:31.976103 kernel: Dynamic Preempt: voluntary
Nov 12 20:53:31.976123 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:53:31.976135 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:53:31.976145 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 20:53:31.976161 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:53:31.976171 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:53:31.976181 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:53:31.976195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:53:31.976206 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 20:53:31.976217 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 12 20:53:31.976227 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:53:31.976237 kernel: Console: colour VGA+ 80x25
Nov 12 20:53:31.976248 kernel: printk: console [ttyS0] enabled
Nov 12 20:53:31.976262 kernel: ACPI: Core revision 20230628
Nov 12 20:53:31.976273 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 12 20:53:31.976283 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:53:31.976294 kernel: x2apic enabled
Nov 12 20:53:31.976305 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:53:31.976315 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 12 20:53:31.976326 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 12 20:53:31.976337 kernel: kvm-guest: setup PV IPIs
Nov 12 20:53:31.976362 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 12 20:53:31.976373 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 12 20:53:31.976384 kernel: Calibrating delay loop (skipped) preset value.. 5589.48 BogoMIPS (lpj=2794744)
Nov 12 20:53:31.976395 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 12 20:53:31.976409 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 12 20:53:31.976421 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 12 20:53:31.976433 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:53:31.976444 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:53:31.976455 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:53:31.976469 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:53:31.976480 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 12 20:53:31.976491 kernel: RETBleed: Mitigation: untrained return thunk
Nov 12 20:53:31.976507 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:53:31.976518 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:53:31.976529 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 12 20:53:31.976540 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 12 20:53:31.976551 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 12 20:53:31.976567 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:53:31.976577 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:53:31.976587 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:53:31.976598 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:53:31.976609 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 12 20:53:31.976619 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:53:31.976631 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:53:31.976641 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:53:31.976651 kernel: landlock: Up and running.
Nov 12 20:53:31.976666 kernel: SELinux: Initializing.
Nov 12 20:53:31.976686 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:53:31.976697 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 20:53:31.976707 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 12 20:53:31.976718 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:53:31.976728 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:53:31.976743 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 20:53:31.976753 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 12 20:53:31.976763 kernel: ... version: 0
Nov 12 20:53:31.976777 kernel: ... bit width: 48
Nov 12 20:53:31.976787 kernel: ... generic registers: 6
Nov 12 20:53:31.976797 kernel: ... value mask: 0000ffffffffffff
Nov 12 20:53:31.976808 kernel: ... max period: 00007fffffffffff
Nov 12 20:53:31.976818 kernel: ... fixed-purpose events: 0
Nov 12 20:53:31.976828 kernel: ... event mask: 000000000000003f
Nov 12 20:53:31.976839 kernel: signal: max sigframe size: 1776
Nov 12 20:53:31.976849 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:53:31.976861 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:53:31.976876 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:53:31.976887 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:53:31.976897 kernel: .... node #0, CPUs: #1 #2 #3
Nov 12 20:53:31.976907 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 20:53:31.976935 kernel: smpboot: Max logical packages: 1
Nov 12 20:53:31.976946 kernel: smpboot: Total of 4 processors activated (22357.95 BogoMIPS)
Nov 12 20:53:31.976957 kernel: devtmpfs: initialized
Nov 12 20:53:31.976967 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:53:31.976978 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:53:31.976994 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 20:53:31.977004 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:53:31.977015 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:53:31.977026 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:53:31.977036 kernel: audit: type=2000 audit(1731444811.256:1): state=initialized audit_enabled=0 res=1
Nov 12 20:53:31.977047 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:53:31.977058 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:53:31.977068 kernel: cpuidle: using governor menu
Nov 12 20:53:31.977078 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:53:31.977093 kernel: dca service started, version 1.12.1
Nov 12 20:53:31.977104 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 12 20:53:31.977114 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 12 20:53:31.977125 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:53:31.977136 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:53:31.977146 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:53:31.977157 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:53:31.977168 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:53:31.977178 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:53:31.977192 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:53:31.977203 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:53:31.977214 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:53:31.977224 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:53:31.977235 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:53:31.977245 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:53:31.977256 kernel: ACPI: Interpreter enabled
Nov 12 20:53:31.977266 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 20:53:31.977277 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:53:31.977291 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:53:31.977302 kernel: PCI: Using E820 reservations for host bridge windows
Nov 12 20:53:31.977313 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 12 20:53:31.977324 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:53:31.977687 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:53:31.977876 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 12 20:53:31.978097 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 12 20:53:31.978120 kernel: PCI host bridge to bus 0000:00
Nov 12 20:53:31.978313 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:53:31.978482 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:53:31.978645 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:53:31.978857 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 12 20:53:31.979113 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 12 20:53:31.979265 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 12 20:53:31.979420 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:53:31.979618 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 12 20:53:31.979837 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 12 20:53:31.980023 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 12 20:53:31.980189 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 12 20:53:31.980351 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 12 20:53:31.980516 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 12 20:53:31.980725 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:53:31.980900 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 12 20:53:31.981100 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 12 20:53:31.981267 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 12 20:53:31.981460 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 12 20:53:31.981633 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 12 20:53:31.981816 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 12 20:53:31.982008 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 12 20:53:31.982198 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:53:31.982363 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 12 20:53:31.982529 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 12 20:53:31.982702 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 12 20:53:31.982871 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 12 20:53:31.983084 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 12 20:53:31.983274 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 12 20:53:31.983464 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 12 20:53:31.983638 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 12 20:53:31.983823 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 12 20:53:31.984040 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 12 20:53:31.984208 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 12 20:53:31.984229 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:53:31.984240 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:53:31.984250 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:53:31.984261 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:53:31.984272 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 12 20:53:31.984282 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 12 20:53:31.984293 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 12 20:53:31.984303 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 12 20:53:31.984314 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 12 20:53:31.984327 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 12 20:53:31.984337 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 12 20:53:31.984348 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 12 20:53:31.984358 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 12 20:53:31.984369 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 12 20:53:31.984380 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 12 20:53:31.984390 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 12 20:53:31.984401 kernel: iommu: Default domain type: Translated
Nov 12 20:53:31.984412 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:53:31.984426 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:53:31.984436 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:53:31.984447 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 12 20:53:31.984457 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 12 20:53:31.984619 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 12 20:53:31.984833 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 12 20:53:31.985028 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 12 20:53:31.985046 kernel: vgaarb: loaded
Nov 12 20:53:31.985063 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 12 20:53:31.985074 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 12 20:53:31.985085 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:53:31.985096 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:53:31.985106 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:53:31.985117 kernel: pnp: PnP ACPI init
Nov 12 20:53:31.985336 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 12 20:53:31.985354 kernel: pnp: PnP ACPI: found 6 devices
Nov 12 20:53:31.985371 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:53:31.985382 kernel: NET: Registered PF_INET protocol family
Nov 12 20:53:31.985393 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:53:31.985404 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 20:53:31.985415 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:53:31.985426 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 20:53:31.985437 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 20:53:31.985447 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 20:53:31.985459 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:53:31.985474 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 20:53:31.985485 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:53:31.985496 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:53:31.985658 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:53:31.985831 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:53:31.986004 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:53:31.986155 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 12 20:53:31.986316 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 12 20:53:31.986475 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 12 20:53:31.986498 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:53:31.986509 kernel: Initialise system trusted keyrings
Nov 12 20:53:31.986521 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 20:53:31.986531 kernel: Key type asymmetric registered
Nov 12 20:53:31.986542 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:53:31.986553 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:53:31.986564 kernel: io scheduler mq-deadline registered
Nov 12 20:53:31.986574 kernel: io scheduler kyber registered
Nov 12 20:53:31.986585 kernel: io scheduler bfq registered
Nov 12 20:53:31.986600 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:53:31.986612 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 12 20:53:31.986623 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 12 20:53:31.986634 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 12 20:53:31.986645 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:53:31.986656 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:53:31.986666 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:53:31.986687 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:53:31.986698 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:53:31.986888 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 12 20:53:31.986906 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:53:31.987153 kernel: rtc_cmos 00:04: registered as rtc0
Nov 12 20:53:31.987310 kernel: rtc_cmos 00:04: setting system clock to 2024-11-12T20:53:31 UTC (1731444811)
Nov 12 20:53:31.987462 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 12 20:53:31.987478 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 12 20:53:31.987489 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:53:31.987506 kernel: Segment Routing with IPv6
Nov 12 20:53:31.987517 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:53:31.987527 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:53:31.987538 kernel: Key type dns_resolver registered
Nov 12 20:53:31.987548 kernel: IPI shorthand broadcast: enabled
Nov 12 20:53:31.987558 kernel: sched_clock: Marking stable (951003015, 115666389)->(1089946195, -23276791)
Nov 12 20:53:31.987569 kernel: registered taskstats version 1
Nov 12 20:53:31.987580 kernel: Loading compiled-in X.509 certificates
Nov 12 20:53:31.987591 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:53:31.987605 kernel: Key type .fscrypt registered
Nov 12 20:53:31.987616 kernel: Key type fscrypt-provisioning registered
Nov 12 20:53:31.987627 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:53:31.987637 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:53:31.987648 kernel: ima: No architecture policies found
Nov 12 20:53:31.987659 kernel: clk: Disabling unused clocks
Nov 12 20:53:31.987678 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:53:31.987689 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:53:31.987699 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:53:31.987714 kernel: Run /init as init process
Nov 12 20:53:31.987725 kernel: with arguments:
Nov 12 20:53:31.987736 kernel: /init
Nov 12 20:53:31.987746 kernel: with environment:
Nov 12 20:53:31.987757 kernel: HOME=/
Nov 12 20:53:31.987767 kernel: TERM=linux
Nov 12 20:53:31.987777 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:53:31.987791 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:31.987809 systemd[1]: Detected virtualization kvm.
Nov 12 20:53:31.987820 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:31.987831 systemd[1]: Running in initrd.
Nov 12 20:53:31.987842 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:53:31.987853 systemd[1]: Hostname set to <localhost>.
Nov 12 20:53:31.987865 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 20:53:31.987877 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:53:31.987888 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:31.987903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:31.988015 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:53:31.988047 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:31.988062 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:53:31.988074 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:53:31.988092 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:53:31.988104 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:53:31.988116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:31.988128 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:31.988140 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:53:31.988152 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:31.988164 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:31.988176 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:53:31.988192 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:31.988203 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:31.988215 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:53:31.988227 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:53:31.988239 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:31.988251 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:31.988263 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:31.988275 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:53:31.988288 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:53:31.988303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:31.988315 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:53:31.988326 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:53:31.988339 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:31.988351 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:31.988362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:31.988375 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:31.988414 systemd-journald[191]: Collecting audit messages is disabled.
Nov 12 20:53:31.988446 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:31.988458 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:53:31.988475 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:31.988487 systemd-journald[191]: Journal started
Nov 12 20:53:31.988518 systemd-journald[191]: Runtime Journal (/run/log/journal/c2fb37ac5ecf4f41b3a32549660a94ba) is 6.0M, max 48.4M, 42.3M free.
Nov 12 20:53:31.976146 systemd-modules-load[193]: Inserted module 'overlay'
Nov 12 20:53:32.021930 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:32.021957 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:53:32.021973 kernel: Bridge firewalling registered
Nov 12 20:53:32.005195 systemd-modules-load[193]: Inserted module 'br_netfilter'
Nov 12 20:53:32.024854 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:32.027432 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:32.029836 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:32.046112 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:32.049398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:32.052075 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:32.055094 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:53:32.065594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:32.068267 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:32.069862 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:32.072278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:32.086071 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:53:32.088546 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:32.097764 dracut-cmdline[228]: dracut-dracut-053
Nov 12 20:53:32.100637 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:32.121876 systemd-resolved[230]: Positive Trust Anchors:
Nov 12 20:53:32.121894 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:53:32.121964 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:53:32.124545 systemd-resolved[230]: Defaulting to hostname 'linux'.
Nov 12 20:53:32.125819 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:32.132819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:32.181950 kernel: SCSI subsystem initialized
Nov 12 20:53:32.191937 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:53:32.201936 kernel: iscsi: registered transport (tcp)
Nov 12 20:53:32.222950 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:53:32.223003 kernel: QLogic iSCSI HBA Driver
Nov 12 20:53:32.271726 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:32.280165 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:53:32.309035 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:53:32.309093 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:53:32.310119 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:53:32.351964 kernel: raid6: avx2x4 gen() 29210 MB/s
Nov 12 20:53:32.368940 kernel: raid6: avx2x2 gen() 30140 MB/s
Nov 12 20:53:32.386073 kernel: raid6: avx2x1 gen() 25423 MB/s
Nov 12 20:53:32.386098 kernel: raid6: using algorithm avx2x2 gen() 30140 MB/s
Nov 12 20:53:32.404130 kernel: raid6: .... xor() 17691 MB/s, rmw enabled
Nov 12 20:53:32.404187 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:53:32.425949 kernel: xor: automatically using best checksumming function avx
Nov 12 20:53:32.602957 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:53:32.616965 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:32.624076 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:32.638772 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Nov 12 20:53:32.643581 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:32.656109 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:53:32.668876 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Nov 12 20:53:32.700395 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:32.714126 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:32.782617 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:32.794123 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:53:32.811199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:53:32.813663 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:53:32.816751 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:32.818144 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:53:32.831164 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:53:32.845638 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 12 20:53:32.869677 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:53:32.869703 kernel: libata version 3.00 loaded.
Nov 12 20:53:32.869728 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 20:53:32.872462 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:53:32.872477 kernel: GPT:9289727 != 19775487
Nov 12 20:53:32.872489 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:53:32.872499 kernel: GPT:9289727 != 19775487
Nov 12 20:53:32.872509 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:53:32.872520 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:53:32.872530 kernel: ahci 0000:00:1f.2: version 3.0
Nov 12 20:53:32.887507 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 12 20:53:32.887524 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 12 20:53:32.887795 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 12 20:53:32.888124 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:53:32.888137 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:53:32.888147 kernel: scsi host0: ahci
Nov 12 20:53:32.888321 kernel: scsi host1: ahci
Nov 12 20:53:32.888506 kernel: scsi host2: ahci
Nov 12 20:53:32.888704 kernel: scsi host3: ahci
Nov 12 20:53:32.888856 kernel: scsi host4: ahci
Nov 12 20:53:32.889024 kernel: scsi host5: ahci
Nov 12 20:53:32.889171 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 12 20:53:32.889182 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 12 20:53:32.889195 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 12 20:53:32.889211 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 12 20:53:32.889224 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 12 20:53:32.889235 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 12 20:53:32.848695 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:53:32.868120 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:53:32.868238 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:32.869841 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:32.871004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:32.871138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:32.955661 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (478)
Nov 12 20:53:32.955688 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (477)
Nov 12 20:53:32.872435 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:32.879161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:32.962634 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 20:53:33.001148 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 20:53:33.002607 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:33.009768 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 20:53:33.014801 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 20:53:33.016046 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 20:53:33.035062 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:53:33.038208 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:33.051718 disk-uuid[556]: Primary Header is updated.
Nov 12 20:53:33.051718 disk-uuid[556]: Secondary Entries is updated.
Nov 12 20:53:33.051718 disk-uuid[556]: Secondary Header is updated.
Nov 12 20:53:33.056940 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:53:33.056969 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:33.063938 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:53:33.192987 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:33.193060 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 12 20:53:33.194664 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:33.194689 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 12 20:53:33.196416 kernel: ata3.00: applying bridge limits
Nov 12 20:53:33.196436 kernel: ata3.00: configured for UDMA/100
Nov 12 20:53:33.196943 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 12 20:53:33.200937 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:33.200965 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:33.201933 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 12 20:53:33.245534 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 12 20:53:33.257750 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:53:33.257768 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 12 20:53:34.063979 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 20:53:34.064407 disk-uuid[562]: The operation has completed successfully.
Nov 12 20:53:34.096312 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:53:34.096461 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:53:34.119105 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:53:34.126117 sh[593]: Success
Nov 12 20:53:34.142960 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 12 20:53:34.183429 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:53:34.197046 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:53:34.208974 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:53:34.215792 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:53:34.215843 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:34.215860 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:53:34.216804 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:53:34.217541 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:53:34.222371 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:53:34.224754 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:53:34.235208 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:53:34.237437 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:53:34.248046 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:34.248090 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:34.248106 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:53:34.251946 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:53:34.262122 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:53:34.263843 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:34.339473 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:53:34.345106 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:53:34.364437 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:34.375219 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:53:34.402733 systemd-networkd[774]: lo: Link UP
Nov 12 20:53:34.403319 systemd-networkd[774]: lo: Gained carrier
Nov 12 20:53:34.403110 ignition[754]: Ignition 2.19.0
Nov 12 20:53:34.403119 ignition[754]: Stage: fetch-offline
Nov 12 20:53:34.405269 systemd-networkd[774]: Enumeration completed
Nov 12 20:53:34.403175 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:34.405644 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:53:34.403189 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:34.405787 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:34.403310 ignition[754]: parsed url from cmdline: ""
Nov 12 20:53:34.405792 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:34.403315 ignition[754]: no config URL provided
Nov 12 20:53:34.406849 systemd-networkd[774]: eth0: Link UP
Nov 12 20:53:34.403322 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:53:34.406854 systemd-networkd[774]: eth0: Gained carrier
Nov 12 20:53:34.403336 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:53:34.406861 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:34.403373 ignition[754]: op(1): [started] loading QEMU firmware config module
Nov 12 20:53:34.407276 systemd[1]: Reached target network.target - Network.
Nov 12 20:53:34.403380 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 20:53:34.413430 ignition[754]: op(1): [finished] loading QEMU firmware config module
Nov 12 20:53:34.433988 systemd-networkd[774]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 20:53:34.461791 ignition[754]: parsing config with SHA512: d72ff244e687795c30b122b6421bde1bdc1d9346522536abbaf5815e7ecdb40f69469a36fa2e672769a7e4e2563eaea0855020d239786f41975a547e5b31e7d7
Nov 12 20:53:34.467482 unknown[754]: fetched base config from "system"
Nov 12 20:53:34.467496 unknown[754]: fetched user config from "qemu"
Nov 12 20:53:34.468043 ignition[754]: fetch-offline: fetch-offline passed
Nov 12 20:53:34.468130 ignition[754]: Ignition finished successfully
Nov 12 20:53:34.469969 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:34.472445 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 20:53:34.482058 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:53:34.497076 ignition[786]: Ignition 2.19.0
Nov 12 20:53:34.497087 ignition[786]: Stage: kargs
Nov 12 20:53:34.497257 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:34.497269 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:34.498089 ignition[786]: kargs: kargs passed
Nov 12 20:53:34.498144 ignition[786]: Ignition finished successfully
Nov 12 20:53:34.504628 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:34.565063 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:53:34.581162 ignition[795]: Ignition 2.19.0
Nov 12 20:53:34.581174 ignition[795]: Stage: disks
Nov 12 20:53:34.581350 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:34.581362 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:34.582242 ignition[795]: disks: disks passed
Nov 12 20:53:34.584838 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:53:34.582288 ignition[795]: Ignition finished successfully
Nov 12 20:53:34.586425 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:34.588281 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:53:34.590197 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:53:34.591209 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:53:34.593172 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:53:34.600073 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:53:34.630242 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 20:53:35.064840 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:53:35.074018 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:53:35.187947 kernel: EXT4-fs (vda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:53:35.188389 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:53:35.190526 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:53:35.200020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:35.204175 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:53:35.206623 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:53:35.206667 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:53:35.215739 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813)
Nov 12 20:53:35.215778 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:35.215792 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:35.215803 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:53:35.206690 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:35.217988 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:53:35.220057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:35.222188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:53:35.236120 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:53:35.275940 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:53:35.281546 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:53:35.287295 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:53:35.292338 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:53:35.399760 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:35.416027 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:53:35.418057 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:35.425289 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:53:35.427494 kernel: BTRFS info (device vda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:35.447082 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:53:35.451368 ignition[929]: INFO : Ignition 2.19.0
Nov 12 20:53:35.451368 ignition[929]: INFO : Stage: mount
Nov 12 20:53:35.453099 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:35.453099 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 20:53:35.453099 ignition[929]: INFO : mount: mount passed
Nov 12 20:53:35.453099 ignition[929]: INFO : Ignition finished successfully
Nov 12 20:53:35.454267 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:53:35.465034 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:53:35.474105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:35.486943 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941)
Nov 12 20:53:35.487014 kernel: BTRFS info (device vda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:35.489061 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:35.489088 kernel: BTRFS info (device vda6): using free space tree
Nov 12 20:53:35.492947 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 20:53:35.495894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:35.518951 ignition[958]: INFO : Ignition 2.19.0 Nov 12 20:53:35.518951 ignition[958]: INFO : Stage: files Nov 12 20:53:35.518951 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:35.518951 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:53:35.524353 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:53:35.524353 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:53:35.524353 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:53:35.520043 systemd-networkd[774]: eth0: Gained IPv6LL Nov 12 20:53:35.530652 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:53:35.530652 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:53:35.530652 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:53:35.530652 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 20:53:35.530652 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 12 20:53:35.530652 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:53:35.530652 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:53:35.525107 unknown[958]: wrote ssh authorized keys file for user: core Nov 12 20:53:35.565299 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 12 20:53:35.634964 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:53:35.637113 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:53:35.638868 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:53:35.638868 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:53:35.642395 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:53:35.644276 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:53:35.646065 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:53:35.647846 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:53:35.649610 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:53:35.651610 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:53:35.653472 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Nov 12 20:53:35.655341 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:53:35.657887 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:53:35.660424 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:53:35.662566 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Nov 12 20:53:36.040530 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 12 20:53:36.714181 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Nov 12 20:53:36.714181 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 12 20:53:36.718831 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 20:53:36.721715 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 12 20:53:36.721715 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 12 20:53:36.721715 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 12 20:53:36.727870 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:53:36.730519 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:53:36.730519 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 12 20:53:36.730519 ignition[958]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Nov 12 20:53:36.735547 ignition[958]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:53:36.737822 ignition[958]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 12 20:53:36.737822 ignition[958]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Nov 12 20:53:36.737822 ignition[958]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Nov 12 20:53:36.775634 ignition[958]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:53:36.794750 ignition[958]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 12 20:53:36.796959 ignition[958]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Nov 12 20:53:36.796959 ignition[958]: INFO : files: op(14): [started] setting preset to enabled for 
"prepare-helm.service" Nov 12 20:53:36.800451 ignition[958]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:53:36.802858 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:53:36.804860 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:53:36.806869 ignition[958]: INFO : files: files passed Nov 12 20:53:36.807736 ignition[958]: INFO : Ignition finished successfully Nov 12 20:53:36.810715 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:53:36.819227 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:53:36.821977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 20:53:36.825175 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:53:36.825311 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:53:36.840302 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Nov 12 20:53:36.846415 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:53:36.846415 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:53:36.851543 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:53:36.856224 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:53:36.858141 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:53:36.865144 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:53:36.903623 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:53:36.903826 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:53:36.905021 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:53:36.909880 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 20:53:36.910453 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:53:36.923195 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:53:36.943678 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:53:36.955281 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:53:36.968520 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:53:36.969389 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:53:36.969778 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:53:36.970261 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:53:36.970395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:53:36.981115 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:53:36.982573 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:53:36.983280 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Nov 12 20:53:36.983604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:53:36.983958 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:53:36.984464 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:53:36.984838 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:53:36.985258 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:53:36.985620 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:53:36.985957 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:53:36.986464 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:53:36.986688 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:53:37.005038 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:53:37.005724 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:53:37.006204 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:53:37.011266 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:53:37.013972 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:53:37.014189 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:53:37.017180 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:53:37.017339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:53:37.018048 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:53:37.018469 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:53:37.029027 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:53:37.029693 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:53:37.030210 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:53:37.033974 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:53:37.034105 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:53:37.034465 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:53:37.034581 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:53:37.037681 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:53:37.037814 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:53:37.039678 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:53:37.039811 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:53:37.059125 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:53:37.060540 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:53:37.061484 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:53:37.061704 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:53:37.069563 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:53:37.069704 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:53:37.076488 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:53:37.076626 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
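Everything the files stage logged above — the core user's SSH key, the helm tarball, the kubernetes sysext image and its /etc/extensions symlink, the containerd drop-in, and the prepare-helm unit — maps one-to-one onto sections of the Ignition config (or, upstream of it, a Butane config). A hedged reconstruction of its shape, omitting some of the smaller files (install.sh, nginx.yaml, the nfs manifests, flatcar-cgroupv1) for brevity; the SSH key and unit bodies are placeholders, since the log records the operations but not the payloads:

variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder   # key contents not in the log
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
      contents:
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
systemd:
  units:
    - name: containerd.service
      dropins:
        - name: 10-use-cgroupfs.conf
          contents: |
            # drop-in body not recorded in the log; the name suggests it
            # switches containerd to the cgroupfs cgroup driver
    - name: prepare-helm.service
      enabled: true
      # unit body omitted; the log shows only that the unit was written
    - name: coreos-metadata.service
      enabled: false   # matches the "setting preset to disabled" op above

The /etc/flatcar/update.conf file written by op(9) is sketched at the end of this section, where update_engine and locksmithd pick it up.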
Nov 12 20:53:37.093575 ignition[1012]: INFO : Ignition 2.19.0 Nov 12 20:53:37.093575 ignition[1012]: INFO : Stage: umount Nov 12 20:53:37.093575 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:37.093575 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 12 20:53:37.093174 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:53:37.099005 ignition[1012]: INFO : umount: umount passed Nov 12 20:53:37.099005 ignition[1012]: INFO : Ignition finished successfully Nov 12 20:53:37.104222 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:53:37.104394 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:53:37.105039 systemd[1]: Stopped target network.target - Network. Nov 12 20:53:37.107793 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:53:37.107868 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:53:37.109781 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:53:37.109842 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:53:37.120460 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:53:37.120521 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:53:37.122437 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:53:37.122493 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:53:37.122979 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:53:37.126862 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:53:37.136854 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:53:37.138170 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:53:37.138959 systemd-networkd[774]: eth0: DHCPv6 lease lost Nov 12 20:53:37.143341 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:53:37.144673 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:53:37.148812 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:53:37.150072 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:53:37.165092 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:53:37.167518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:53:37.168800 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:53:37.171613 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:53:37.171681 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:53:37.174976 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:53:37.175041 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:53:37.178370 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:53:37.179498 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:53:37.182174 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:53:37.198840 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:53:37.199019 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:53:37.210258 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 12 20:53:37.210459 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:53:37.213349 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:53:37.213429 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:53:37.214549 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:53:37.214594 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:53:37.214848 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:53:37.214897 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:53:37.215727 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:53:37.215782 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:53:37.216555 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:53:37.216609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:53:37.226862 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:53:37.227594 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:53:37.227649 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:53:37.227974 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:53:37.228026 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:53:37.228281 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:53:37.228325 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:53:37.228632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:53:37.228680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:53:37.250889 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:53:37.251076 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:53:37.701821 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:53:37.701982 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:53:37.715108 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:53:37.715318 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:53:37.715379 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:53:37.726080 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:53:37.734308 systemd[1]: Switching root. Nov 12 20:53:37.770491 systemd-journald[191]: Journal stopped Nov 12 20:53:39.112006 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). 
Nov 12 20:53:39.112078 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:53:39.112097 kernel: SELinux: policy capability open_perms=1 Nov 12 20:53:39.112109 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:53:39.112121 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:53:39.112140 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:53:39.112158 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:53:39.112169 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:53:39.112180 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:53:39.112197 kernel: audit: type=1403 audit(1731444818.268:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:53:39.112215 systemd[1]: Successfully loaded SELinux policy in 44.735ms. Nov 12 20:53:39.112238 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.475ms. Nov 12 20:53:39.112251 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:53:39.112263 systemd[1]: Detected virtualization kvm. Nov 12 20:53:39.112281 systemd[1]: Detected architecture x86-64. Nov 12 20:53:39.112293 systemd[1]: Detected first boot. Nov 12 20:53:39.112305 systemd[1]: Initializing machine ID from VM UUID. Nov 12 20:53:39.112317 zram_generator::config[1073]: No configuration found. Nov 12 20:53:39.112331 systemd[1]: Populated /etc with preset unit settings. Nov 12 20:53:39.112344 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:53:39.112356 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 12 20:53:39.112369 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 20:53:39.112387 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:53:39.112400 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:53:39.112413 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:53:39.112425 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 20:53:39.112437 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:53:39.112450 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:53:39.112462 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:53:39.112474 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:53:39.112486 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:53:39.112512 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:53:39.112525 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:53:39.112538 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 20:53:39.112550 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:53:39.112562 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Nov 12 20:53:39.112575 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:53:39.112587 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:53:39.112599 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:53:39.112616 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:53:39.112641 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:53:39.112655 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:53:39.112668 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:53:39.112680 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:53:39.112693 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:53:39.112708 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:53:39.112720 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:53:39.112732 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:53:39.112749 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:53:39.112761 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:53:39.112773 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 20:53:39.112786 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:53:39.112798 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:53:39.112810 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:53:39.112827 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 20:53:39.112840 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:53:39.112852 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 12 20:53:39.112871 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:53:39.112883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:53:39.112896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:53:39.112908 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:53:39.113028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:53:39.113044 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:53:39.113059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:53:39.113074 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 20:53:39.113097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:53:39.113113 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:53:39.113128 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 12 20:53:39.113143 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Nov 12 20:53:39.113155 kernel: fuse: init (API version 7.39) Nov 12 20:53:39.113167 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:53:39.113179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:53:39.113191 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:53:39.113209 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:53:39.113221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:53:39.113234 kernel: loop: module loaded Nov 12 20:53:39.113246 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:53:39.113259 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:53:39.113272 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:53:39.113285 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:53:39.113297 kernel: ACPI: bus type drm_connector registered Nov 12 20:53:39.113329 systemd-journald[1165]: Collecting audit messages is disabled. Nov 12 20:53:39.113358 systemd-journald[1165]: Journal started Nov 12 20:53:39.113381 systemd-journald[1165]: Runtime Journal (/run/log/journal/c2fb37ac5ecf4f41b3a32549660a94ba) is 6.0M, max 48.4M, 42.3M free. Nov 12 20:53:39.117070 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:53:39.118780 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 12 20:53:39.120421 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:53:39.121968 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:53:39.123620 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:53:39.125558 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:53:39.127537 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:53:39.127832 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:53:39.129789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:53:39.130092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:53:39.132403 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:53:39.132703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:53:39.134622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:53:39.134909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:53:39.136901 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:53:39.137207 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:53:39.139051 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:53:39.139362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:53:39.141471 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:53:39.143470 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:53:39.145790 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
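The modprobe@*.service entries above are systemd's template-unit mechanism for pulling in kernel modules (configfs, dm_mod, drm, efi_pstore, fuse, loop) on demand, while systemd-modules-load.service reads static module lists from modules-load.d. To have an extra module loaded at every boot, one could ship such a list via the same provisioning config; a sketch, with an arbitrary module name as the example:

variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/modules-load.d/nbd.conf   # module choice is illustrative
      contents:
        inline: |
          # one module name per line
          nbd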
Nov 12 20:53:39.168006 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:53:39.178019 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:53:39.181173 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:53:39.198992 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:53:39.201686 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:53:39.207075 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:53:39.208638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:53:39.218200 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:53:39.220342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:53:39.224395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:53:39.227690 systemd-journald[1165]: Time spent on flushing to /var/log/journal/c2fb37ac5ecf4f41b3a32549660a94ba is 24.950ms for 940 entries. Nov 12 20:53:39.227690 systemd-journald[1165]: System Journal (/var/log/journal/c2fb37ac5ecf4f41b3a32549660a94ba) is 8.0M, max 195.6M, 187.6M free. Nov 12 20:53:39.274774 systemd-journald[1165]: Received client request to flush runtime journal. Nov 12 20:53:39.227305 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:53:39.233041 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:53:39.234690 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:53:39.236255 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:53:39.251200 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:53:39.255500 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:53:39.258877 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:53:39.271311 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 12 20:53:39.277293 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:53:39.281269 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Nov 12 20:53:39.281292 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Nov 12 20:53:39.284090 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:53:39.289432 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:53:39.312135 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:53:39.349291 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:53:39.362248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:53:39.381206 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Nov 12 20:53:39.381229 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. 
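systemd-tmpfiles is what creates and maintains the static files and directories above; the repeated "ACLs are not supported, ignoring" notices mean entries that request POSIX ACLs are applied without them, consistent with the "-ACL" flag in the systemd feature string logged earlier. Its rule files are one-line records (type, path, mode, user, group, age, argument), and a custom rule can be shipped the same way as any other file; name and rule here are illustrative:

variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/tmpfiles.d/example.conf
      contents:
        inline: |
          # type path            mode user group age
          d /var/lib/example     0755 root root  -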
Nov 12 20:53:39.388337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:53:40.486195 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:53:40.503068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:53:40.529715 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Nov 12 20:53:40.545671 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:53:40.572300 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:53:40.603272 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 12 20:53:40.622936 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1240) Nov 12 20:53:40.631071 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:53:40.651235 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1247) Nov 12 20:53:40.652939 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1247) Nov 12 20:53:40.662835 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 12 20:53:40.720465 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:53:40.721973 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 12 20:53:40.728968 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:53:40.749959 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 12 20:53:40.758512 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 12 20:53:40.758884 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 12 20:53:40.780940 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 12 20:53:40.809606 systemd-networkd[1256]: lo: Link UP Nov 12 20:53:40.810015 systemd-networkd[1256]: lo: Gained carrier Nov 12 20:53:40.812013 systemd-networkd[1256]: Enumeration completed Nov 12 20:53:40.812707 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:53:40.812713 systemd-networkd[1256]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:53:40.816663 systemd-networkd[1256]: eth0: Link UP Nov 12 20:53:40.816670 systemd-networkd[1256]: eth0: Gained carrier Nov 12 20:53:40.816684 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:53:40.841230 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:53:40.852340 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:53:40.855036 systemd-networkd[1256]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 12 20:53:40.856171 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:53:40.867177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
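The networkd lines above show why eth0 got its address: no interface-specific .network file exists, so the catch-all /usr/lib/systemd/network/zz-default.network matches (hence the "potentially unpredictable interface name" warning) and runs DHCP, yielding 10.0.0.137/16 from 10.0.0.1. networkd applies the first matching .network file in lexical order, with /etc/systemd/network taking precedence over /usr/lib, so a pinned configuration could be delivered like this; filename and match are illustrative:

variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/systemd/network/10-eth0.network
      contents:
        inline: |
          [Match]
          Name=eth0

          [Network]
          DHCP=yes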
Nov 12 20:53:40.941215 kernel: kvm_amd: TSC scaling supported Nov 12 20:53:40.941330 kernel: kvm_amd: Nested Virtualization enabled Nov 12 20:53:40.941361 kernel: kvm_amd: Nested Paging enabled Nov 12 20:53:40.942515 kernel: kvm_amd: LBR virtualization supported Nov 12 20:53:40.942549 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 12 20:53:40.943238 kernel: kvm_amd: Virtual GIF supported Nov 12 20:53:40.966944 kernel: EDAC MC: Ver: 3.0.0 Nov 12 20:53:41.015876 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:53:41.018134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:53:41.045218 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:53:41.057789 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:53:41.092219 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:53:41.094042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:53:41.103176 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:53:41.110910 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:53:41.145973 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 20:53:41.147659 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:53:41.149012 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:53:41.149051 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:53:41.150142 systemd[1]: Reached target machines.target - Containers. Nov 12 20:53:41.152834 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:53:41.163102 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:53:41.165937 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:53:41.167125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:53:41.168160 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:53:41.170595 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:53:41.174795 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:53:41.177926 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:53:41.192961 kernel: loop0: detected capacity change from 0 to 142488 Nov 12 20:53:41.194456 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:53:41.207980 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:53:41.208924 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Nov 12 20:53:41.216933 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:53:41.307941 kernel: loop1: detected capacity change from 0 to 140768 Nov 12 20:53:41.334943 kernel: loop2: detected capacity change from 0 to 211296 Nov 12 20:53:41.453960 kernel: loop3: detected capacity change from 0 to 142488 Nov 12 20:53:41.467945 kernel: loop4: detected capacity change from 0 to 140768 Nov 12 20:53:41.476962 kernel: loop5: detected capacity change from 0 to 211296 Nov 12 20:53:41.483881 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 12 20:53:41.484785 (sd-merge)[1307]: Merged extensions into '/usr'. Nov 12 20:53:41.532629 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:53:41.533064 systemd[1]: Reloading... Nov 12 20:53:41.620796 zram_generator::config[1341]: No configuration found. Nov 12 20:53:41.767547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:53:41.774995 ldconfig[1292]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:53:41.836243 systemd[1]: Reloading finished in 302 ms. Nov 12 20:53:41.856793 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:53:41.860740 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:53:41.877110 systemd[1]: Starting ensure-sysext.service... Nov 12 20:53:41.882038 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:53:41.888208 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:53:41.888228 systemd[1]: Reloading... Nov 12 20:53:41.958856 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:53:41.959411 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:53:41.960819 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:53:41.961282 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Nov 12 20:53:41.961396 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Nov 12 20:53:41.969197 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:53:41.969218 systemd-tmpfiles[1380]: Skipping /boot Nov 12 20:53:41.979020 zram_generator::config[1411]: No configuration found. Nov 12 20:53:41.986899 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:53:41.986931 systemd-tmpfiles[1380]: Skipping /boot Nov 12 20:53:42.122711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:53:42.208890 systemd[1]: Reloading finished in 320 ms. Nov 12 20:53:42.238292 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:53:42.256607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:53:42.259702 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Nov 12 20:53:42.262348 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:53:42.268544 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:53:42.272723 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:53:42.279234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:53:42.279966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:53:42.284361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:53:42.291275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:53:42.298377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:53:42.300097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:53:42.300222 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:53:42.301609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:53:42.301861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:53:42.304785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:53:42.305087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:53:42.316543 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:53:42.319439 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:53:42.320209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:53:42.327842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:53:42.328334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:53:42.336393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:53:42.341371 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:53:42.346215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:53:42.348120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:53:42.351200 augenrules[1491]: No rules Nov 12 20:53:42.353541 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 20:53:42.355064 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:53:42.358655 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:53:42.364142 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:53:42.366405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:53:42.373427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:53:42.375951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 12 20:53:42.376226 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:53:42.378753 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:53:42.381092 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:53:42.381400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:53:42.389653 systemd-resolved[1457]: Positive Trust Anchors: Nov 12 20:53:42.389676 systemd-resolved[1457]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:53:42.389715 systemd-resolved[1457]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:53:42.394534 systemd-resolved[1457]: Defaulting to hostname 'linux'. Nov 12 20:53:42.395616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:53:42.395944 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:53:42.408306 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:53:42.411165 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:53:42.414452 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:53:42.417692 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:53:42.419230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:53:42.419560 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:53:42.419699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:53:42.420651 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:53:42.422814 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:53:42.424788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:53:42.425133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:53:42.427494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:53:42.427791 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:53:42.431091 systemd[1]: Finished ensure-sysext.service. Nov 12 20:53:42.436937 systemd[1]: Reached target network.target - Network. Nov 12 20:53:42.437982 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:53:42.439277 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
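systemd-resolved defaults to the hostname 'linux' above because nothing has set a system hostname yet: QEMU supplies no metadata hostname, and flatcar-metadata-hostname.service was skipped back in the initrd. A static hostname could be shipped the same way as the other files; the name here is an example:

variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/hostname
      contents:
        inline: |
          node-01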
Nov 12 20:53:42.442227 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 12 20:53:42.471488 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:53:42.471815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:53:42.473732 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:53:42.474149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:53:42.477552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:53:42.571865 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 12 20:53:42.573685 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:53:42.575070 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:53:42.576551 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:53:42.578216 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:53:42.579726 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:53:42.579771 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:53:43.206390 systemd-resolved[1457]: Clock change detected. Flushing caches. Nov 12 20:53:43.206491 systemd-timesyncd[1521]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 12 20:53:43.206538 systemd-timesyncd[1521]: Initial clock synchronization to Tue 2024-11-12 20:53:43.204335 UTC. Nov 12 20:53:43.206820 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 20:53:43.208198 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:53:43.209697 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:53:43.211184 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:53:43.213134 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:53:43.216977 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:53:43.220067 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:53:43.227380 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:53:43.228692 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:53:43.229828 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:53:43.231090 systemd[1]: System is tainted: cgroupsv1 Nov 12 20:53:43.231153 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:53:43.231188 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:53:43.232851 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:53:43.236005 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:53:43.238854 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:53:43.244410 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 20:53:43.245656 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Nov 12 20:53:43.247575 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:53:43.252946 jq[1532]: false Nov 12 20:53:43.251783 systemd-networkd[1256]: eth0: Gained IPv6LL Nov 12 20:53:43.263053 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 20:53:43.270744 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:53:43.274736 dbus-daemon[1531]: [system] SELinux support is enabled Nov 12 20:53:43.278966 extend-filesystems[1534]: Found loop3 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found loop4 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found loop5 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found sr0 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found vda Nov 12 20:53:43.278966 extend-filesystems[1534]: Found vda1 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found vda2 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found vda3 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found usr Nov 12 20:53:43.278966 extend-filesystems[1534]: Found vda4 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found vda6 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found vda7 Nov 12 20:53:43.278966 extend-filesystems[1534]: Found vda9 Nov 12 20:53:43.278966 extend-filesystems[1534]: Checking size of /dev/vda9 Nov 12 20:53:43.306836 extend-filesystems[1534]: Resized partition /dev/vda9 Nov 12 20:53:43.283129 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 20:53:43.292787 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:53:43.294494 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:53:43.297012 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 20:53:43.304810 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:53:43.310016 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 20:53:43.314111 extend-filesystems[1558]: resize2fs 1.47.1 (20-May-2024) Nov 12 20:53:43.319186 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:53:43.324933 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 12 20:53:43.323505 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 20:53:43.324001 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:53:43.324500 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:53:43.325191 jq[1556]: true Nov 12 20:53:43.326954 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 20:53:43.332578 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:53:43.333023 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 12 20:53:43.337762 update_engine[1553]: I20241112 20:53:43.337216 1553 main.cc:92] Flatcar Update Engine starting Nov 12 20:53:43.341389 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1242) Nov 12 20:53:43.345169 update_engine[1553]: I20241112 20:53:43.345017 1553 update_check_scheduler.cc:74] Next update check in 8m28s Nov 12 20:53:43.361946 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 12 20:53:43.364057 jq[1564]: true Nov 12 20:53:43.369410 (ntainerd)[1570]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:53:43.390349 extend-filesystems[1558]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 12 20:53:43.390349 extend-filesystems[1558]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 12 20:53:43.390349 extend-filesystems[1558]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 12 20:53:43.396632 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Nov 12 20:53:43.397768 tar[1563]: linux-amd64/helm Nov 12 20:53:43.400697 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:53:43.401075 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:53:43.426499 systemd[1]: Started update-engine.service - Update Engine. Nov 12 20:53:43.428269 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:53:43.430174 systemd-logind[1552]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:53:43.430206 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:53:43.431578 systemd-logind[1552]: New seat seat0. Nov 12 20:53:43.488570 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 12 20:53:43.493366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:43.497841 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:53:43.501123 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:53:43.501173 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:53:43.504044 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:53:43.504072 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:53:43.506339 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:53:43.518094 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:53:43.519938 systemd[1]: Started systemd-logind.service - User Login Management. Nov 12 20:53:43.535789 bash[1601]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:53:43.538719 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:53:43.552207 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 20:53:43.559170 systemd[1]: coreos-metadata.service: Deactivated successfully. 
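update_engine starts and schedules its first check ("Next update check in 8m28s"), and locksmithd, the cluster reboot manager, comes up alongside it. Both ship with Flatcar and can be queried from a shell; a hedged sketch (output fields vary by release):

    update_engine_client -status    # e.g. CURRENT_OP=UPDATE_STATUS_IDLE
    locksmithctl status             # reboot strategy and any held reboot locks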
Nov 12 20:53:43.559578 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 12 20:53:43.562723 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:53:43.625445 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:53:43.643698 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:53:44.182345 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:53:44.192535 containerd[1570]: time="2024-11-12T20:53:44.192350194Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:53:44.220265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:53:44.227692 containerd[1570]: time="2024-11-12T20:53:44.226991448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:53:44.229995 containerd[1570]: time="2024-11-12T20:53:44.229335327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:53:44.229995 containerd[1570]: time="2024-11-12T20:53:44.229399087Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:53:44.229995 containerd[1570]: time="2024-11-12T20:53:44.229427771Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:53:44.229995 containerd[1570]: time="2024-11-12T20:53:44.229669655Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:53:44.229995 containerd[1570]: time="2024-11-12T20:53:44.229697748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:53:44.229995 containerd[1570]: time="2024-11-12T20:53:44.229792435Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:53:44.229995 containerd[1570]: time="2024-11-12T20:53:44.229811050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:53:44.230482 containerd[1570]: time="2024-11-12T20:53:44.230453656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:53:44.230551 containerd[1570]: time="2024-11-12T20:53:44.230535159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:53:44.230623 containerd[1570]: time="2024-11-12T20:53:44.230606263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:53:44.230679 containerd[1570]: time="2024-11-12T20:53:44.230665514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Nov 12 20:53:44.230875 containerd[1570]: time="2024-11-12T20:53:44.230855110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:53:44.231271 containerd[1570]: time="2024-11-12T20:53:44.231248057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:53:44.231579 containerd[1570]: time="2024-11-12T20:53:44.231554492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:53:44.231659 containerd[1570]: time="2024-11-12T20:53:44.231643409Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:53:44.231847 containerd[1570]: time="2024-11-12T20:53:44.231827735Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:53:44.232002 containerd[1570]: time="2024-11-12T20:53:44.231982786Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:53:44.269846 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:53:44.283244 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:53:44.283738 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:53:44.292596 containerd[1570]: time="2024-11-12T20:53:44.292543736Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:53:44.292789 containerd[1570]: time="2024-11-12T20:53:44.292767586Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:53:44.292969 containerd[1570]: time="2024-11-12T20:53:44.292899213Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:53:44.293060 containerd[1570]: time="2024-11-12T20:53:44.293039536Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:53:44.293166 containerd[1570]: time="2024-11-12T20:53:44.293145565Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:53:44.293475 containerd[1570]: time="2024-11-12T20:53:44.293451209Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:53:44.294230 containerd[1570]: time="2024-11-12T20:53:44.294205515Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 20:53:44.294490 containerd[1570]: time="2024-11-12T20:53:44.294465553Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:53:44.294568 containerd[1570]: time="2024-11-12T20:53:44.294551224Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:53:44.294638 containerd[1570]: time="2024-11-12T20:53:44.294620995Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:53:44.294713 containerd[1570]: time="2024-11-12T20:53:44.294695234Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Nov 12 20:53:44.294795 containerd[1570]: time="2024-11-12T20:53:44.294777298Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:53:44.294871 containerd[1570]: time="2024-11-12T20:53:44.294856326Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:53:44.294981 containerd[1570]: time="2024-11-12T20:53:44.294962646Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:53:44.295041 containerd[1570]: time="2024-11-12T20:53:44.295027788Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:53:44.295105 containerd[1570]: time="2024-11-12T20:53:44.295087089Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:53:44.295180 containerd[1570]: time="2024-11-12T20:53:44.295161008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:53:44.295252 containerd[1570]: time="2024-11-12T20:53:44.295236410Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:53:44.295343 containerd[1570]: time="2024-11-12T20:53:44.295327270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.295446 containerd[1570]: time="2024-11-12T20:53:44.295428640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.295528 containerd[1570]: time="2024-11-12T20:53:44.295509111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.295617 containerd[1570]: time="2024-11-12T20:53:44.295597547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.295694 containerd[1570]: time="2024-11-12T20:53:44.295676465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.295785 containerd[1570]: time="2024-11-12T20:53:44.295764170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.295859 containerd[1570]: time="2024-11-12T20:53:44.295841956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.296215 containerd[1570]: time="2024-11-12T20:53:44.296195109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.296302 containerd[1570]: time="2024-11-12T20:53:44.296282954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.296453 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:53:44.297561 containerd[1570]: time="2024-11-12T20:53:44.297534953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.297940 containerd[1570]: time="2024-11-12T20:53:44.297889138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.298027 containerd[1570]: time="2024-11-12T20:53:44.298009684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Nov 12 20:53:44.298097 containerd[1570]: time="2024-11-12T20:53:44.298081349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.298178 containerd[1570]: time="2024-11-12T20:53:44.298163583Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:53:44.298273 containerd[1570]: time="2024-11-12T20:53:44.298255776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.298340 containerd[1570]: time="2024-11-12T20:53:44.298324756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.298452 containerd[1570]: time="2024-11-12T20:53:44.298430534Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:53:44.299110 containerd[1570]: time="2024-11-12T20:53:44.299087347Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:53:44.299366 containerd[1570]: time="2024-11-12T20:53:44.299343428Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:53:44.299499 containerd[1570]: time="2024-11-12T20:53:44.299463964Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:53:44.299499 containerd[1570]: time="2024-11-12T20:53:44.299494711Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:53:44.299596 containerd[1570]: time="2024-11-12T20:53:44.299511022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:53:44.299596 containerd[1570]: time="2024-11-12T20:53:44.299531921Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:53:44.299596 containerd[1570]: time="2024-11-12T20:53:44.299552320Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:53:44.299596 containerd[1570]: time="2024-11-12T20:53:44.299567468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 20:53:44.300190 containerd[1570]: time="2024-11-12T20:53:44.300114495Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:53:44.300190 containerd[1570]: time="2024-11-12T20:53:44.300192501Z" level=info msg="Connect containerd service" Nov 12 20:53:44.300516 containerd[1570]: time="2024-11-12T20:53:44.300239109Z" level=info msg="using legacy CRI server" Nov 12 20:53:44.300516 containerd[1570]: time="2024-11-12T20:53:44.300247695Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:53:44.300516 containerd[1570]: time="2024-11-12T20:53:44.300442671Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:53:44.301296 containerd[1570]: time="2024-11-12T20:53:44.301232453Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 
20:53:44.301660 containerd[1570]: time="2024-11-12T20:53:44.301598550Z" level=info msg="Start subscribing containerd event" Nov 12 20:53:44.301840 containerd[1570]: time="2024-11-12T20:53:44.301820717Z" level=info msg="Start recovering state" Nov 12 20:53:44.301997 containerd[1570]: time="2024-11-12T20:53:44.301982110Z" level=info msg="Start event monitor" Nov 12 20:53:44.302061 containerd[1570]: time="2024-11-12T20:53:44.302049637Z" level=info msg="Start snapshots syncer" Nov 12 20:53:44.302115 containerd[1570]: time="2024-11-12T20:53:44.302103427Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:53:44.302161 containerd[1570]: time="2024-11-12T20:53:44.302149474Z" level=info msg="Start streaming server" Nov 12 20:53:44.302803 containerd[1570]: time="2024-11-12T20:53:44.302770640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:53:44.303075 containerd[1570]: time="2024-11-12T20:53:44.303056005Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:53:44.304924 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:53:44.305655 containerd[1570]: time="2024-11-12T20:53:44.305629996Z" level=info msg="containerd successfully booted in 0.121901s" Nov 12 20:53:44.391999 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:53:44.408504 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 20:53:44.413482 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:53:44.415580 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:53:44.446217 tar[1563]: linux-amd64/LICENSE Nov 12 20:53:44.446217 tar[1563]: linux-amd64/README.md Nov 12 20:53:44.503563 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:53:44.898984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:53:44.900731 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:53:44.904124 systemd[1]: Startup finished in 7.638s (kernel) + 6.051s (userspace) = 13.690s. Nov 12 20:53:44.935563 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:53:45.448782 kubelet[1668]: E1112 20:53:45.448666 1668 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:53:45.454071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:53:45.454484 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:53:47.602218 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:53:47.617276 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:45024.service - OpenSSH per-connection server daemon (10.0.0.1:45024). Nov 12 20:53:47.659612 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 45024 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:53:47.662764 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:53:47.672079 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:53:47.680206 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
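The long "Start cri plugin with config {...}" dump above is containerd printing its effective CRI configuration: overlayfs snapshotter, runc through the io.containerd.runc.v2 shim with SystemdCgroup:false, sandbox image registry.k8s.io/pause:3.8, and CNI directories /opt/cni/bin and /etc/cni/net.d. The "failed to load cni during init" error is benign at this stage; it only means no CNI conflist has been installed yet. A sketch of the /etc/containerd/config.toml fragment that corresponds to that dump, assuming containerd 1.7's version-2 config schema (on a real node this would be merged into the existing file, not written over it):

    cat <<'EOF' >/etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false   # matches SystemdCgroup:false in the dump above
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
    EOF
    systemctl restart containerd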
Nov 12 20:53:47.682533 systemd-logind[1552]: New session 1 of user core. Nov 12 20:53:47.695784 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:53:47.697993 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:53:47.720567 (systemd)[1688]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:53:47.827242 systemd[1688]: Queued start job for default target default.target. Nov 12 20:53:47.827714 systemd[1688]: Created slice app.slice - User Application Slice. Nov 12 20:53:47.827740 systemd[1688]: Reached target paths.target - Paths. Nov 12 20:53:47.827759 systemd[1688]: Reached target timers.target - Timers. Nov 12 20:53:47.840067 systemd[1688]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:53:47.848677 systemd[1688]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:53:47.848761 systemd[1688]: Reached target sockets.target - Sockets. Nov 12 20:53:47.848775 systemd[1688]: Reached target basic.target - Basic System. Nov 12 20:53:47.848816 systemd[1688]: Reached target default.target - Main User Target. Nov 12 20:53:47.848852 systemd[1688]: Startup finished in 119ms. Nov 12 20:53:47.849786 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:53:47.851538 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:53:47.911269 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:45028.service - OpenSSH per-connection server daemon (10.0.0.1:45028). Nov 12 20:53:47.945738 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 45028 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:53:47.947863 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:53:47.952663 systemd-logind[1552]: New session 2 of user core. Nov 12 20:53:47.966269 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:53:48.025888 sshd[1700]: pam_unix(sshd:session): session closed for user core Nov 12 20:53:48.034207 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:45040.service - OpenSSH per-connection server daemon (10.0.0.1:45040). Nov 12 20:53:48.034753 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:45028.service: Deactivated successfully. Nov 12 20:53:48.037646 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:53:48.038410 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:53:48.039536 systemd-logind[1552]: Removed session 2. Nov 12 20:53:48.065101 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 45040 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:53:48.066712 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:53:48.070736 systemd-logind[1552]: New session 3 of user core. Nov 12 20:53:48.080156 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:53:48.131262 sshd[1705]: pam_unix(sshd:session): session closed for user core Nov 12 20:53:48.141154 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:45046.service - OpenSSH per-connection server daemon (10.0.0.1:45046). Nov 12 20:53:48.141692 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:45040.service: Deactivated successfully. Nov 12 20:53:48.143683 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:53:48.144332 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:53:48.145732 systemd-logind[1552]: Removed session 3. 
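With multi-user.target reached, the first SSH logins arrive: sshd (whose RSA/ECDSA/ED25519 host keys were generated a few seconds earlier by sshd-keygen.service) accepts the core user's public key, PAM opens the session, and systemd starts a per-user manager, user@500.service, whose own unit graph is logged inline. A short hedged sketch for inspecting the same state from a root shell:

    ssh-keygen -A                       # regenerate any missing host keys, roughly what sshd-keygen.service does
    loginctl list-sessions              # logind's view: session 1 of user core, etc.
    systemctl status user@500.service   # the per-user manager started above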
Nov 12 20:53:48.170393 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 45046 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:53:48.172455 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:53:48.177634 systemd-logind[1552]: New session 4 of user core. Nov 12 20:53:48.188496 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:53:48.247929 sshd[1713]: pam_unix(sshd:session): session closed for user core Nov 12 20:53:48.261262 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:45048.service - OpenSSH per-connection server daemon (10.0.0.1:45048). Nov 12 20:53:48.261991 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:45046.service: Deactivated successfully. Nov 12 20:53:48.265069 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:53:48.266607 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:53:48.267410 systemd-logind[1552]: Removed session 4. Nov 12 20:53:48.290790 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 45048 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:53:48.292475 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:53:48.297109 systemd-logind[1552]: New session 5 of user core. Nov 12 20:53:48.306203 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:53:48.367539 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:53:48.367892 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:53:48.382758 sudo[1728]: pam_unix(sudo:session): session closed for user root Nov 12 20:53:48.384936 sshd[1721]: pam_unix(sshd:session): session closed for user core Nov 12 20:53:48.401175 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:45054.service - OpenSSH per-connection server daemon (10.0.0.1:45054). Nov 12 20:53:48.401817 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:45048.service: Deactivated successfully. Nov 12 20:53:48.403950 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:53:48.404719 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:53:48.406186 systemd-logind[1552]: Removed session 5. Nov 12 20:53:48.431368 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 45054 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:53:48.433205 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:53:48.437422 systemd-logind[1552]: New session 6 of user core. Nov 12 20:53:48.447251 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:53:48.503577 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:53:48.503994 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:53:48.508349 sudo[1738]: pam_unix(sudo:session): session closed for user root Nov 12 20:53:48.515132 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:53:48.515536 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:53:48.535170 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:53:48.537466 auditctl[1741]: No rules Nov 12 20:53:48.539138 systemd[1]: audit-rules.service: Deactivated successfully. 
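The sudo records above show an install script clearing the stock audit ruleset: it removes 80-selinux.rules and 99-default.rules from /etc/audit/rules.d and restarts audit-rules.service, after which auditctl reports "No rules"; the restart completes just below. A sketch of the equivalent manual steps, assuming the usual auditd rules layout:

    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load    # recompile /etc/audit/rules.d/*.rules and load them into the kernel
    auditctl -l          # prints "No rules" for an empty ruleset, matching the log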
Nov 12 20:53:48.539588 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:53:48.541702 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:53:48.576716 augenrules[1760]: No rules Nov 12 20:53:48.578880 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:53:48.580839 sudo[1737]: pam_unix(sudo:session): session closed for user root Nov 12 20:53:48.583246 sshd[1731]: pam_unix(sshd:session): session closed for user core Nov 12 20:53:48.590244 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:45064.service - OpenSSH per-connection server daemon (10.0.0.1:45064). Nov 12 20:53:48.590791 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:45054.service: Deactivated successfully. Nov 12 20:53:48.593633 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:53:48.594678 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:53:48.595787 systemd-logind[1552]: Removed session 6. Nov 12 20:53:48.620590 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 45064 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:53:48.622212 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:53:48.626999 systemd-logind[1552]: New session 7 of user core. Nov 12 20:53:48.636502 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:53:48.691900 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:53:48.692362 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:53:49.004356 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:53:49.004626 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:53:49.287796 dockerd[1791]: time="2024-11-12T20:53:49.287610665Z" level=info msg="Starting up" Nov 12 20:53:50.710663 dockerd[1791]: time="2024-11-12T20:53:50.710573411Z" level=info msg="Loading containers: start." Nov 12 20:53:50.844938 kernel: Initializing XFRM netlink socket Nov 12 20:53:50.951645 systemd-networkd[1256]: docker0: Link UP Nov 12 20:53:50.977486 dockerd[1791]: time="2024-11-12T20:53:50.977323163Z" level=info msg="Loading containers: done." Nov 12 20:53:50.998554 dockerd[1791]: time="2024-11-12T20:53:50.998472305Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:53:50.998750 dockerd[1791]: time="2024-11-12T20:53:50.998623970Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:53:50.998826 dockerd[1791]: time="2024-11-12T20:53:50.998795532Z" level=info msg="Daemon has completed initialization" Nov 12 20:53:51.048043 dockerd[1791]: time="2024-11-12T20:53:51.047944033Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:53:51.048267 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:53:51.863159 containerd[1570]: time="2024-11-12T20:53:51.863112319Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:53:52.787422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145163028.mount: Deactivated successfully. 
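dockerd initializes on the overlay2 storage driver, warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR (harmless outside of image builds), and exposes its API on /run/docker.sock; the first Kubernetes image pull then begins through containerd. A few hedged sanity checks against that daemon:

    docker info --format '{{.Driver}}'              # expect: overlay2
    docker version --format '{{.Server.Version}}'   # 26.1.0 in this boot
    curl --unix-socket /run/docker.sock http://localhost/_ping   # Engine API liveness; prints OK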
Nov 12 20:53:55.046460 containerd[1570]: time="2024-11-12T20:53:55.046355173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:55.047738 containerd[1570]: time="2024-11-12T20:53:55.047656966Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140799" Nov 12 20:53:55.071007 containerd[1570]: time="2024-11-12T20:53:55.070358312Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:55.075057 containerd[1570]: time="2024-11-12T20:53:55.074989864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:55.076510 containerd[1570]: time="2024-11-12T20:53:55.076445697Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 3.213285147s" Nov 12 20:53:55.076588 containerd[1570]: time="2024-11-12T20:53:55.076523553Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:53:55.104699 containerd[1570]: time="2024-11-12T20:53:55.104646874Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:53:55.704883 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:53:55.720260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:55.925693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:53:55.938314 (kubelet)[2019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:53:56.669804 kubelet[2019]: E1112 20:53:56.669640 2019 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:53:56.677274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:53:56.677526 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
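The kubelet fails again exactly as on first boot: /var/lib/kubelet/config.yaml is still missing. That file is normally written by kubeadm during init/join, so these crash-loops are expected until the node is bootstrapped. Purely as an illustration of what lives there, a minimal hedged sketch (the values below are generic, not recovered from this node):

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    # Minimal KubeletConfiguration sketch; kubeadm generates the real file.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
    EOF
    systemctl restart kubelet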
Nov 12 20:53:58.122501 containerd[1570]: time="2024-11-12T20:53:58.122412954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:58.126572 containerd[1570]: time="2024-11-12T20:53:58.126501357Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218299" Nov 12 20:53:58.130094 containerd[1570]: time="2024-11-12T20:53:58.129999843Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:58.134517 containerd[1570]: time="2024-11-12T20:53:58.134424687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:58.135534 containerd[1570]: time="2024-11-12T20:53:58.135487011Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 3.030796345s" Nov 12 20:53:58.135534 containerd[1570]: time="2024-11-12T20:53:58.135529060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:53:58.160893 containerd[1570]: time="2024-11-12T20:53:58.160847858Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:53:59.963349 containerd[1570]: time="2024-11-12T20:53:59.963239270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:59.964346 containerd[1570]: time="2024-11-12T20:53:59.964294310Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332660" Nov 12 20:53:59.966415 containerd[1570]: time="2024-11-12T20:53:59.966361249Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:59.970733 containerd[1570]: time="2024-11-12T20:53:59.970661059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:53:59.971817 containerd[1570]: time="2024-11-12T20:53:59.971774549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.810884221s" Nov 12 20:53:59.971885 containerd[1570]: time="2024-11-12T20:53:59.971818852Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:53:59.995952 
containerd[1570]: time="2024-11-12T20:53:59.995879669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:54:01.547077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505846146.mount: Deactivated successfully. Nov 12 20:54:03.022235 containerd[1570]: time="2024-11-12T20:54:03.022137245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:03.023591 containerd[1570]: time="2024-11-12T20:54:03.023470157Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616816" Nov 12 20:54:03.026810 containerd[1570]: time="2024-11-12T20:54:03.026761614Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:03.030503 containerd[1570]: time="2024-11-12T20:54:03.030434698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:03.031465 containerd[1570]: time="2024-11-12T20:54:03.031419516Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 3.035464927s" Nov 12 20:54:03.031465 containerd[1570]: time="2024-11-12T20:54:03.031463829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:54:03.058183 containerd[1570]: time="2024-11-12T20:54:03.058128643Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:54:03.645849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3029754489.mount: Deactivated successfully. 
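The control-plane images are pulled straight through containerd's CRI plugin, each pull bracketed by ImageCreate events and a timing line; kube-proxy's ~28.6 MB in ~3.0 s works out to roughly 9 MB/s from registry.k8s.io. A hedged sketch for listing the same images from the host (crictl needs the endpoint flag if /etc/crictl.yaml is absent):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # or through containerd's own CLI, scoped to the CRI namespace:
    ctr --namespace k8s.io images ls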
Nov 12 20:54:04.580304 containerd[1570]: time="2024-11-12T20:54:04.580242695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:04.581050 containerd[1570]: time="2024-11-12T20:54:04.581001940Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Nov 12 20:54:04.582380 containerd[1570]: time="2024-11-12T20:54:04.582346944Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:04.585197 containerd[1570]: time="2024-11-12T20:54:04.585123225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:04.586457 containerd[1570]: time="2024-11-12T20:54:04.586387738Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.528220983s" Nov 12 20:54:04.586534 containerd[1570]: time="2024-11-12T20:54:04.586460925Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:54:04.610269 containerd[1570]: time="2024-11-12T20:54:04.610230446Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:54:05.262475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010592739.mount: Deactivated successfully. 
Nov 12 20:54:05.270509 containerd[1570]: time="2024-11-12T20:54:05.270459744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:05.271365 containerd[1570]: time="2024-11-12T20:54:05.271275706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Nov 12 20:54:05.272615 containerd[1570]: time="2024-11-12T20:54:05.272570075Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:05.275116 containerd[1570]: time="2024-11-12T20:54:05.275074515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:05.276004 containerd[1570]: time="2024-11-12T20:54:05.275956410Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 665.686271ms" Nov 12 20:54:05.276004 containerd[1570]: time="2024-11-12T20:54:05.275993931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:54:05.301713 containerd[1570]: time="2024-11-12T20:54:05.301663317Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:54:05.849078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275018649.mount: Deactivated successfully. Nov 12 20:54:06.927984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:54:06.958315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:07.109734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:07.116719 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:07.666076 kubelet[2176]: E1112 20:54:07.665982 2176 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:07.671138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:07.671461 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 20:54:09.553138 containerd[1570]: time="2024-11-12T20:54:09.553058463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:09.554363 containerd[1570]: time="2024-11-12T20:54:09.554323948Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Nov 12 20:54:09.555868 containerd[1570]: time="2024-11-12T20:54:09.555818203Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:09.559609 containerd[1570]: time="2024-11-12T20:54:09.559565426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:09.560959 containerd[1570]: time="2024-11-12T20:54:09.560926620Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.2592189s" Nov 12 20:54:09.560959 containerd[1570]: time="2024-11-12T20:54:09.560957538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:54:12.672554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:12.686169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:12.706371 systemd[1]: Reloading requested from client PID 2271 ('systemctl') (unit session-7.scope)... Nov 12 20:54:12.706391 systemd[1]: Reloading... Nov 12 20:54:12.791564 zram_generator::config[2310]: No configuration found. Nov 12 20:54:13.193763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:13.270412 systemd[1]: Reloading finished in 563 ms. Nov 12 20:54:13.323804 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:54:13.323977 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:54:13.324394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:13.342220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:13.497443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:13.503543 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:54:13.553072 kubelet[2370]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:54:13.553072 kubelet[2370]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
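During the reload requested from session-7 (`systemctl`, PID 2271), systemd flags docker.socket for pointing at the legacy /var/run/docker.sock path and rewrites it to /run/docker.sock on the fly. On Flatcar the vendor unit sits on the read-only /usr, so the clean permanent fix is a drop-in; a sketch:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload     # the empty ListenStream= resets the vendor value first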
Nov 12 20:54:13.553072 kubelet[2370]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:54:13.553620 kubelet[2370]: I1112 20:54:13.553218 2370 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:54:14.150097 kubelet[2370]: I1112 20:54:14.148532 2370 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:54:14.150097 kubelet[2370]: I1112 20:54:14.148568 2370 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:54:14.150097 kubelet[2370]: I1112 20:54:14.149005 2370 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:54:14.172555 kubelet[2370]: E1112 20:54:14.172502 2370 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:14.173980 kubelet[2370]: I1112 20:54:14.173943 2370 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:54:14.219255 kubelet[2370]: I1112 20:54:14.219195 2370 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 20:54:14.222015 kubelet[2370]: I1112 20:54:14.221973 2370 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:54:14.222216 kubelet[2370]: I1112 20:54:14.222188 2370 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:54:14.224246 kubelet[2370]: I1112 20:54:14.224210 2370 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:54:14.224246 kubelet[2370]: I1112 20:54:14.224236 2370 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 
20:54:14.224453 kubelet[2370]: I1112 20:54:14.224379 2370 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:14.224522 kubelet[2370]: I1112 20:54:14.224504 2370 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:54:14.224552 kubelet[2370]: I1112 20:54:14.224526 2370 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:54:14.224574 kubelet[2370]: I1112 20:54:14.224555 2370 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:54:14.224606 kubelet[2370]: I1112 20:54:14.224574 2370 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:54:14.226429 kubelet[2370]: I1112 20:54:14.226407 2370 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:54:14.234344 kubelet[2370]: W1112 20:54:14.234263 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:14.234344 kubelet[2370]: E1112 20:54:14.234344 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:14.237190 kubelet[2370]: W1112 20:54:14.237148 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:14.237190 kubelet[2370]: E1112 20:54:14.237186 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:14.251667 kubelet[2370]: I1112 20:54:14.251605 2370 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:54:14.252851 kubelet[2370]: W1112 20:54:14.252799 2370 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
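Every reflector in this stretch fails identically: the kubelet cannot reach the API server at 10.0.0.137:6443 ("connection refused"). On a kubeadm-style control plane this is the expected chicken-and-egg phase; the apiserver only appears once its static pod manifest lands in /etc/kubernetes/manifests, the path the kubelet registered just above. A hedged sketch for confirming that from the node:

    ss -ltn 'sport = :6443'                   # nothing listening yet
    curl -k https://10.0.0.137:6443/healthz   # refused until the apiserver static pod runs
    ls /etc/kubernetes/manifests              # empty until kubeadm writes the manifests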
Nov 12 20:54:14.253890 kubelet[2370]: I1112 20:54:14.253710 2370 server.go:1256] "Started kubelet" Nov 12 20:54:14.254508 kubelet[2370]: I1112 20:54:14.254054 2370 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:54:14.254508 kubelet[2370]: I1112 20:54:14.254112 2370 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:54:14.273113 kubelet[2370]: I1112 20:54:14.272992 2370 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:54:14.273580 kubelet[2370]: I1112 20:54:14.273556 2370 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:54:14.275957 kubelet[2370]: I1112 20:54:14.274796 2370 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:54:14.275957 kubelet[2370]: I1112 20:54:14.274981 2370 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:54:14.275957 kubelet[2370]: I1112 20:54:14.275048 2370 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:54:14.275957 kubelet[2370]: W1112 20:54:14.275465 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:14.275957 kubelet[2370]: E1112 20:54:14.275504 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:14.275957 kubelet[2370]: E1112 20:54:14.275556 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:14.275957 kubelet[2370]: E1112 20:54:14.275820 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Nov 12 20:54:14.276452 kubelet[2370]: I1112 20:54:14.276424 2370 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:54:14.276582 kubelet[2370]: I1112 20:54:14.276558 2370 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:54:14.277293 kubelet[2370]: I1112 20:54:14.277261 2370 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:54:14.278788 kubelet[2370]: E1112 20:54:14.278763 2370 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:54:14.280564 kubelet[2370]: I1112 20:54:14.279093 2370 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:54:14.293126 kubelet[2370]: E1112 20:54:14.293087 2370 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.180753ebddf802a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 20:54:14.253683368 +0000 UTC m=+0.745196455,LastTimestamp:2024-11-12 20:54:14.253683368 +0000 UTC m=+0.745196455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 12 20:54:14.308229 kubelet[2370]: I1112 20:54:14.308183 2370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:54:14.309719 kubelet[2370]: I1112 20:54:14.309673 2370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:54:14.309719 kubelet[2370]: I1112 20:54:14.309722 2370 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:54:14.309795 kubelet[2370]: I1112 20:54:14.309747 2370 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:54:14.309842 kubelet[2370]: E1112 20:54:14.309823 2370 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:54:14.312090 kubelet[2370]: W1112 20:54:14.310375 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:14.312090 kubelet[2370]: E1112 20:54:14.310404 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:14.315188 kubelet[2370]: I1112 20:54:14.315165 2370 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:54:14.315188 kubelet[2370]: I1112 20:54:14.315192 2370 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:54:14.315284 kubelet[2370]: I1112 20:54:14.315208 2370 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:14.377396 kubelet[2370]: I1112 20:54:14.377345 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:54:14.377867 kubelet[2370]: E1112 20:54:14.377833 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Nov 12 20:54:14.409974 kubelet[2370]: E1112 20:54:14.409859 2370 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:54:14.476724 kubelet[2370]: E1112 20:54:14.476661 2370 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Nov 12 20:54:14.579601 kubelet[2370]: I1112 20:54:14.579552 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:54:14.580133 kubelet[2370]: E1112 20:54:14.580044 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Nov 12 20:54:14.610204 kubelet[2370]: E1112 20:54:14.610115 2370 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:54:14.877345 kubelet[2370]: E1112 20:54:14.877282 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Nov 12 20:54:14.982098 kubelet[2370]: I1112 20:54:14.982059 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:54:14.982412 kubelet[2370]: E1112 20:54:14.982387 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Nov 12 20:54:15.010545 kubelet[2370]: E1112 20:54:15.010507 2370 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:54:15.220135 kubelet[2370]: W1112 20:54:15.219979 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:15.220135 kubelet[2370]: E1112 20:54:15.220053 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:15.554247 kubelet[2370]: I1112 20:54:15.554206 2370 policy_none.go:49] "None policy: Start" Nov 12 20:54:15.555045 kubelet[2370]: I1112 20:54:15.555023 2370 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:54:15.555090 kubelet[2370]: I1112 20:54:15.555049 2370 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:54:15.570675 kubelet[2370]: W1112 20:54:15.570611 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:15.570675 kubelet[2370]: E1112 20:54:15.570678 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:15.619892 kubelet[2370]: I1112 20:54:15.619840 2370 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" 
Nov 12 20:54:15.620407 kubelet[2370]: I1112 20:54:15.620195 2370 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:54:15.622672 kubelet[2370]: E1112 20:54:15.622638 2370 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 12 20:54:15.678691 kubelet[2370]: E1112 20:54:15.678658 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s" Nov 12 20:54:15.716185 kubelet[2370]: W1112 20:54:15.716145 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:15.716185 kubelet[2370]: E1112 20:54:15.716182 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:15.763714 kubelet[2370]: W1112 20:54:15.763662 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:15.763714 kubelet[2370]: E1112 20:54:15.763712 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:15.784340 kubelet[2370]: I1112 20:54:15.784299 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:54:15.784710 kubelet[2370]: E1112 20:54:15.784676 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Nov 12 20:54:15.810936 kubelet[2370]: I1112 20:54:15.810801 2370 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:54:15.811941 kubelet[2370]: I1112 20:54:15.811921 2370 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:54:15.812801 kubelet[2370]: I1112 20:54:15.812772 2370 topology_manager.go:215] "Topology Admit Handler" podUID="7bb50464a58021d1857335bbd6dee028" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:54:15.883831 kubelet[2370]: I1112 20:54:15.883784 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:15.883831 kubelet[2370]: I1112 20:54:15.883837 2370 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bb50464a58021d1857335bbd6dee028-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bb50464a58021d1857335bbd6dee028\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:15.884004 kubelet[2370]: I1112 20:54:15.883865 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bb50464a58021d1857335bbd6dee028-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7bb50464a58021d1857335bbd6dee028\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:15.884004 kubelet[2370]: I1112 20:54:15.883889 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:15.884004 kubelet[2370]: I1112 20:54:15.883954 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:15.884004 kubelet[2370]: I1112 20:54:15.883991 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:54:15.884163 kubelet[2370]: I1112 20:54:15.884013 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bb50464a58021d1857335bbd6dee028-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bb50464a58021d1857335bbd6dee028\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:15.884163 kubelet[2370]: I1112 20:54:15.884038 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:15.884163 kubelet[2370]: I1112 20:54:15.884067 2370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:16.116353 kubelet[2370]: E1112 20:54:16.116192 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:16.117318 containerd[1570]: time="2024-11-12T20:54:16.117251427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,}" 
Nov 12 20:54:16.117845 containerd[1570]: time="2024-11-12T20:54:16.117796959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:16.117874 kubelet[2370]: E1112 20:54:16.117358 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:16.118947 kubelet[2370]: E1112 20:54:16.118930 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:16.119300 containerd[1570]: time="2024-11-12T20:54:16.119247919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7bb50464a58021d1857335bbd6dee028,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:16.368339 kubelet[2370]: E1112 20:54:16.368209 2370 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:17.096337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982576437.mount: Deactivated successfully. Nov 12 20:54:17.101940 containerd[1570]: time="2024-11-12T20:54:17.101889282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:17.102902 containerd[1570]: time="2024-11-12T20:54:17.102863450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:17.103783 containerd[1570]: time="2024-11-12T20:54:17.103754654Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:17.104602 containerd[1570]: time="2024-11-12T20:54:17.104558605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:54:17.105269 containerd[1570]: time="2024-11-12T20:54:17.105242354Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:54:17.106606 containerd[1570]: time="2024-11-12T20:54:17.106562293Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 12 20:54:17.107589 containerd[1570]: time="2024-11-12T20:54:17.107547692Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:17.111704 containerd[1570]: time="2024-11-12T20:54:17.111654603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:54:17.112482 containerd[1570]: time="2024-11-12T20:54:17.112456180Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 994.562973ms" Nov 12 20:54:17.113649 containerd[1570]: time="2024-11-12T20:54:17.113604801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 996.217011ms" Nov 12 20:54:17.114775 containerd[1570]: time="2024-11-12T20:54:17.114748113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 995.444781ms" Nov 12 20:54:17.249603 containerd[1570]: time="2024-11-12T20:54:17.249501218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:17.249603 containerd[1570]: time="2024-11-12T20:54:17.249563813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:17.249603 containerd[1570]: time="2024-11-12T20:54:17.249573652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:17.250698 containerd[1570]: time="2024-11-12T20:54:17.250276275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:17.252725 containerd[1570]: time="2024-11-12T20:54:17.251370566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:17.252725 containerd[1570]: time="2024-11-12T20:54:17.251425839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:17.252725 containerd[1570]: time="2024-11-12T20:54:17.251440126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:17.252725 containerd[1570]: time="2024-11-12T20:54:17.251549758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:17.252725 containerd[1570]: time="2024-11-12T20:54:17.252264435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:17.252725 containerd[1570]: time="2024-11-12T20:54:17.252420454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:17.252725 containerd[1570]: time="2024-11-12T20:54:17.252435692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:17.253105 containerd[1570]: time="2024-11-12T20:54:17.253029975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:17.279275 kubelet[2370]: E1112 20:54:17.279050 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="3.2s" Nov 12 20:54:17.318065 containerd[1570]: time="2024-11-12T20:54:17.318012706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:33932df710fd78419c0859d7fa44b8e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3175d25126baf1faa2fd53e806bf54fc85c4cbd7a6dc742c10a7b0c8acdb84aa\"" Nov 12 20:54:17.319299 kubelet[2370]: E1112 20:54:17.319235 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:17.321440 containerd[1570]: time="2024-11-12T20:54:17.321293864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7bb50464a58021d1857335bbd6dee028,Namespace:kube-system,Attempt:0,} returns sandbox id \"936b92544225551917beed4606892caba51a698835eb17c818cd760e321689a2\"" Nov 12 20:54:17.322594 containerd[1570]: time="2024-11-12T20:54:17.322066428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c7145bec6839b5d7dcb0c5beff5515b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"64268a1b81c1b4a86f93551bb8b7feea7fb5751915a8be89ab5b2d00cb4926ab\"" Nov 12 20:54:17.322650 kubelet[2370]: E1112 20:54:17.322262 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:17.323994 containerd[1570]: time="2024-11-12T20:54:17.323954982Z" level=info msg="CreateContainer within sandbox \"3175d25126baf1faa2fd53e806bf54fc85c4cbd7a6dc742c10a7b0c8acdb84aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:54:17.324170 kubelet[2370]: E1112 20:54:17.324149 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:17.326116 containerd[1570]: time="2024-11-12T20:54:17.325459975Z" level=info msg="CreateContainer within sandbox \"936b92544225551917beed4606892caba51a698835eb17c818cd760e321689a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:54:17.327384 containerd[1570]: time="2024-11-12T20:54:17.327267098Z" level=info msg="CreateContainer within sandbox \"64268a1b81c1b4a86f93551bb8b7feea7fb5751915a8be89ab5b2d00cb4926ab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:54:17.335200 kubelet[2370]: W1112 20:54:17.335138 2370 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:17.335328 kubelet[2370]: E1112 20:54:17.335317 2370 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed 
to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Nov 12 20:54:17.346187 containerd[1570]: time="2024-11-12T20:54:17.346148614Z" level=info msg="CreateContainer within sandbox \"3175d25126baf1faa2fd53e806bf54fc85c4cbd7a6dc742c10a7b0c8acdb84aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d5b59e52f7bc929a4de3bebe38288e812d51701d3a2875542cd80e1d5d06aef5\"" Nov 12 20:54:17.347248 containerd[1570]: time="2024-11-12T20:54:17.347172974Z" level=info msg="StartContainer for \"d5b59e52f7bc929a4de3bebe38288e812d51701d3a2875542cd80e1d5d06aef5\"" Nov 12 20:54:17.352934 containerd[1570]: time="2024-11-12T20:54:17.352876578Z" level=info msg="CreateContainer within sandbox \"936b92544225551917beed4606892caba51a698835eb17c818cd760e321689a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"42520763dc6411a49dc4abb079c1b56c1f2fcb4daaf4c2fe72a8222587351bd5\"" Nov 12 20:54:17.353830 containerd[1570]: time="2024-11-12T20:54:17.353399098Z" level=info msg="StartContainer for \"42520763dc6411a49dc4abb079c1b56c1f2fcb4daaf4c2fe72a8222587351bd5\"" Nov 12 20:54:17.357130 containerd[1570]: time="2024-11-12T20:54:17.357076592Z" level=info msg="CreateContainer within sandbox \"64268a1b81c1b4a86f93551bb8b7feea7fb5751915a8be89ab5b2d00cb4926ab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ebd879a516ad51ead51f5ec7fd7bb740a727842fbf5726d042c2b5311e9a33d5\"" Nov 12 20:54:17.357562 containerd[1570]: time="2024-11-12T20:54:17.357513423Z" level=info msg="StartContainer for \"ebd879a516ad51ead51f5ec7fd7bb740a727842fbf5726d042c2b5311e9a33d5\"" Nov 12 20:54:17.386885 kubelet[2370]: I1112 20:54:17.386841 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:54:17.387942 kubelet[2370]: E1112 20:54:17.387190 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Nov 12 20:54:17.424625 containerd[1570]: time="2024-11-12T20:54:17.424575031Z" level=info msg="StartContainer for \"d5b59e52f7bc929a4de3bebe38288e812d51701d3a2875542cd80e1d5d06aef5\" returns successfully" Nov 12 20:54:17.427794 containerd[1570]: time="2024-11-12T20:54:17.427637603Z" level=info msg="StartContainer for \"ebd879a516ad51ead51f5ec7fd7bb740a727842fbf5726d042c2b5311e9a33d5\" returns successfully" Nov 12 20:54:17.433001 containerd[1570]: time="2024-11-12T20:54:17.432134427Z" level=info msg="StartContainer for \"42520763dc6411a49dc4abb079c1b56c1f2fcb4daaf4c2fe72a8222587351bd5\" returns successfully" Nov 12 20:54:18.321928 kubelet[2370]: E1112 20:54:18.321877 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:18.323253 kubelet[2370]: E1112 20:54:18.323226 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:18.324875 kubelet[2370]: E1112 20:54:18.324852 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:19.124531 kubelet[2370]: E1112 20:54:19.124486 
2370 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:54:19.327447 kubelet[2370]: E1112 20:54:19.327393 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:19.327980 kubelet[2370]: E1112 20:54:19.327520 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:19.475088 kubelet[2370]: E1112 20:54:19.474948 2370 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:54:19.928860 kubelet[2370]: E1112 20:54:19.928803 2370 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 12 20:54:20.495221 kubelet[2370]: E1112 20:54:20.495121 2370 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 12 20:54:20.588638 kubelet[2370]: I1112 20:54:20.588603 2370 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:54:20.643237 kubelet[2370]: I1112 20:54:20.643159 2370 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:54:20.649919 kubelet[2370]: E1112 20:54:20.649852 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:20.750493 kubelet[2370]: E1112 20:54:20.750335 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:20.851665 kubelet[2370]: E1112 20:54:20.850996 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:20.952137 kubelet[2370]: E1112 20:54:20.952069 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:21.052802 kubelet[2370]: E1112 20:54:21.052731 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:21.153640 kubelet[2370]: E1112 20:54:21.153536 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:21.253753 kubelet[2370]: E1112 20:54:21.253672 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:21.268289 kubelet[2370]: E1112 20:54:21.268250 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:21.354950 kubelet[2370]: E1112 20:54:21.354770 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:21.455546 kubelet[2370]: E1112 20:54:21.455473 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:21.556194 kubelet[2370]: E1112 20:54:21.556126 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 
20:54:21.657123 kubelet[2370]: E1112 20:54:21.656984 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:21.757884 kubelet[2370]: E1112 20:54:21.757830 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:21.858599 kubelet[2370]: E1112 20:54:21.858532 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:21.959277 kubelet[2370]: E1112 20:54:21.959124 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:22.059775 kubelet[2370]: E1112 20:54:22.059709 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:22.160664 kubelet[2370]: E1112 20:54:22.160589 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:22.261764 kubelet[2370]: E1112 20:54:22.261594 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:22.362091 kubelet[2370]: E1112 20:54:22.361995 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:22.438686 systemd[1]: Reloading requested from client PID 2647 ('systemctl') (unit session-7.scope)... Nov 12 20:54:22.438711 systemd[1]: Reloading... Nov 12 20:54:22.462743 kubelet[2370]: E1112 20:54:22.462694 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:22.524931 zram_generator::config[2686]: No configuration found. Nov 12 20:54:22.563553 kubelet[2370]: E1112 20:54:22.563490 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:22.664033 kubelet[2370]: E1112 20:54:22.663974 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:22.677475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:22.788356 systemd[1]: Reloading finished in 349 ms. Nov 12 20:54:22.872874 kubelet[2370]: E1112 20:54:22.872795 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 12 20:54:22.910663 kubelet[2370]: I1112 20:54:22.910513 2370 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:54:22.910651 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:22.929193 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:54:22.929719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:22.940375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:23.098156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
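The containerd entries at 20:54:17 above walk the CRI pod lifecycle in order: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, then StartContainer runs it. The same three steps can be driven by hand with crictl; a sketch that shells out to it from Go, assuming crictl is installed and pointed at the containerd socket, with pod.json and container.json as hypothetical config files in crictl's documented format:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a crictl subcommand and returns trimmed stdout.
func run(args ...string) (string, error) {
	out, err := exec.Command("crictl", args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Same three CRI steps as in the log:
	// RunPodSandbox -> CreateContainer -> StartContainer.
	sandbox, err := run("runp", "pod.json")
	if err != nil {
		fmt.Println("runp:", err)
		return
	}
	ctr, err := run("create", sandbox, "container.json", "pod.json")
	if err != nil {
		fmt.Println("create:", err)
		return
	}
	if _, err := run("start", ctr); err != nil {
		fmt.Println("start:", err)
		return
	}
	fmt.Println("started container", ctr, "in sandbox", sandbox)
}
```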
Nov 12 20:54:23.104778 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:54:23.159954 kubelet[2741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:54:23.159954 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:54:23.159954 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:54:23.159954 kubelet[2741]: I1112 20:54:23.159659 2741 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:54:23.166055 kubelet[2741]: I1112 20:54:23.166004 2741 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:54:23.166055 kubelet[2741]: I1112 20:54:23.166039 2741 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:54:23.166322 kubelet[2741]: I1112 20:54:23.166298 2741 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:54:23.167949 kubelet[2741]: I1112 20:54:23.167879 2741 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:54:23.169883 kubelet[2741]: I1112 20:54:23.169799 2741 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:54:23.183555 kubelet[2741]: I1112 20:54:23.183501 2741 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:54:23.184447 kubelet[2741]: I1112 20:54:23.184414 2741 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:54:23.184674 kubelet[2741]: I1112 20:54:23.184639 2741 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:54:23.184674 kubelet[2741]: I1112 20:54:23.184676 2741 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:54:23.184861 kubelet[2741]: I1112 20:54:23.184687 2741 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:54:23.184861 kubelet[2741]: I1112 20:54:23.184722 2741 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:23.184861 kubelet[2741]: I1112 20:54:23.184859 2741 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:54:23.184982 kubelet[2741]: I1112 20:54:23.184882 2741 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:54:23.184982 kubelet[2741]: I1112 20:54:23.184967 2741 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:54:23.185036 kubelet[2741]: I1112 20:54:23.184994 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:54:23.187656 kubelet[2741]: I1112 20:54:23.187604 2741 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:54:23.188178 kubelet[2741]: I1112 20:54:23.187963 2741 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:54:23.188923 kubelet[2741]: I1112 20:54:23.188536 2741 server.go:1256] "Started kubelet" Nov 12 20:54:23.190730 kubelet[2741]: I1112 20:54:23.188968 2741 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:54:23.190730 kubelet[2741]: I1112 20:54:23.189023 2741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:54:23.190730 kubelet[2741]: I1112 20:54:23.189362 2741 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:54:23.190730 kubelet[2741]: 
I1112 20:54:23.190242 2741 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:54:23.191980 kubelet[2741]: I1112 20:54:23.191944 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:54:23.197431 kubelet[2741]: I1112 20:54:23.197386 2741 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:54:23.197800 kubelet[2741]: I1112 20:54:23.197750 2741 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:54:23.198093 kubelet[2741]: I1112 20:54:23.198064 2741 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:54:23.218774 kubelet[2741]: I1112 20:54:23.214439 2741 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:54:23.222017 kubelet[2741]: I1112 20:54:23.221678 2741 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:54:23.222017 kubelet[2741]: I1112 20:54:23.221700 2741 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:54:23.223011 kubelet[2741]: E1112 20:54:23.222973 2741 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:54:23.226537 kubelet[2741]: I1112 20:54:23.226513 2741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:54:23.227826 kubelet[2741]: I1112 20:54:23.227789 2741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:54:23.227826 kubelet[2741]: I1112 20:54:23.227824 2741 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:54:23.227901 kubelet[2741]: I1112 20:54:23.227851 2741 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:54:23.228003 kubelet[2741]: E1112 20:54:23.227984 2741 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:54:23.277317 kubelet[2741]: I1112 20:54:23.277280 2741 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:54:23.277499 kubelet[2741]: I1112 20:54:23.277466 2741 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:54:23.277499 kubelet[2741]: I1112 20:54:23.277497 2741 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:23.277735 kubelet[2741]: I1112 20:54:23.277711 2741 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:54:23.277781 kubelet[2741]: I1112 20:54:23.277743 2741 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:54:23.277781 kubelet[2741]: I1112 20:54:23.277752 2741 policy_none.go:49] "None policy: Start" Nov 12 20:54:23.278366 kubelet[2741]: I1112 20:54:23.278318 2741 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:54:23.278366 kubelet[2741]: I1112 20:54:23.278347 2741 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:54:23.278542 kubelet[2741]: I1112 20:54:23.278513 2741 state_mem.go:75] "Updated machine memory state" Nov 12 20:54:23.280483 kubelet[2741]: I1112 20:54:23.280258 2741 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:54:23.280554 kubelet[2741]: I1112 20:54:23.280544 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:54:23.302528 kubelet[2741]: I1112 
20:54:23.302479 2741 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Nov 12 20:54:23.310260 kubelet[2741]: I1112 20:54:23.309773 2741 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Nov 12 20:54:23.310260 kubelet[2741]: I1112 20:54:23.309864 2741 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Nov 12 20:54:23.328796 kubelet[2741]: I1112 20:54:23.328740 2741 topology_manager.go:215] "Topology Admit Handler" podUID="33932df710fd78419c0859d7fa44b8e7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Nov 12 20:54:23.328957 kubelet[2741]: I1112 20:54:23.328822 2741 topology_manager.go:215] "Topology Admit Handler" podUID="c7145bec6839b5d7dcb0c5beff5515b4" podNamespace="kube-system" podName="kube-scheduler-localhost" Nov 12 20:54:23.328957 kubelet[2741]: I1112 20:54:23.328848 2741 topology_manager.go:215] "Topology Admit Handler" podUID="7bb50464a58021d1857335bbd6dee028" podNamespace="kube-system" podName="kube-apiserver-localhost" Nov 12 20:54:23.499224 kubelet[2741]: I1112 20:54:23.499072 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:23.499224 kubelet[2741]: I1112 20:54:23.499128 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:23.499224 kubelet[2741]: I1112 20:54:23.499154 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bb50464a58021d1857335bbd6dee028-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bb50464a58021d1857335bbd6dee028\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:23.499224 kubelet[2741]: I1112 20:54:23.499173 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:23.499464 kubelet[2741]: I1112 20:54:23.499266 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:23.499464 kubelet[2741]: I1112 20:54:23.499320 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7145bec6839b5d7dcb0c5beff5515b4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c7145bec6839b5d7dcb0c5beff5515b4\") " pod="kube-system/kube-scheduler-localhost" Nov 12 20:54:23.499464 kubelet[2741]: I1112 20:54:23.499342 2741 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bb50464a58021d1857335bbd6dee028-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bb50464a58021d1857335bbd6dee028\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:23.499464 kubelet[2741]: I1112 20:54:23.499386 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bb50464a58021d1857335bbd6dee028-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7bb50464a58021d1857335bbd6dee028\") " pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:23.499464 kubelet[2741]: I1112 20:54:23.499408 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33932df710fd78419c0859d7fa44b8e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"33932df710fd78419c0859d7fa44b8e7\") " pod="kube-system/kube-controller-manager-localhost" Nov 12 20:54:23.647071 kubelet[2741]: E1112 20:54:23.646961 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:23.647071 kubelet[2741]: E1112 20:54:23.647043 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:23.647276 kubelet[2741]: E1112 20:54:23.647231 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:24.186146 kubelet[2741]: I1112 20:54:24.186088 2741 apiserver.go:52] "Watching apiserver" Nov 12 20:54:24.198551 kubelet[2741]: I1112 20:54:24.198493 2741 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:54:24.247317 kubelet[2741]: E1112 20:54:24.247010 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:24.250242 kubelet[2741]: E1112 20:54:24.250212 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:24.260858 kubelet[2741]: E1112 20:54:24.259093 2741 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 12 20:54:24.260858 kubelet[2741]: E1112 20:54:24.259806 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:24.298934 kubelet[2741]: I1112 20:54:24.297511 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.295586863 podStartE2EDuration="1.295586863s" podCreationTimestamp="2024-11-12 20:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:24.283223326 +0000 UTC m=+1.173728981" watchObservedRunningTime="2024-11-12 20:54:24.295586863 +0000 UTC 
m=+1.186092518" Nov 12 20:54:24.304275 kubelet[2741]: I1112 20:54:24.304215 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.304162472 podStartE2EDuration="1.304162472s" podCreationTimestamp="2024-11-12 20:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:24.295512435 +0000 UTC m=+1.186018090" watchObservedRunningTime="2024-11-12 20:54:24.304162472 +0000 UTC m=+1.194668127" Nov 12 20:54:24.327643 kubelet[2741]: I1112 20:54:24.327584 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.327531976 podStartE2EDuration="1.327531976s" podCreationTimestamp="2024-11-12 20:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:24.304962793 +0000 UTC m=+1.195468448" watchObservedRunningTime="2024-11-12 20:54:24.327531976 +0000 UTC m=+1.218037631" Nov 12 20:54:25.248783 kubelet[2741]: E1112 20:54:25.248740 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:26.644816 kubelet[2741]: E1112 20:54:26.644751 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:28.250386 update_engine[1553]: I20241112 20:54:28.250247 1553 update_attempter.cc:509] Updating boot flags... Nov 12 20:54:28.508960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2817) Nov 12 20:54:28.565534 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2816) Nov 12 20:54:28.610939 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2816) Nov 12 20:54:28.643359 sudo[1773]: pam_unix(sudo:session): session closed for user root Nov 12 20:54:28.647108 sshd[1766]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:28.651053 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:45064.service: Deactivated successfully. Nov 12 20:54:28.654411 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:54:28.655368 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:54:28.656592 systemd-logind[1552]: Removed session 7. 
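The pod_startup_latency_tracker figures above are plain timestamp subtraction: for kube-controller-manager, 20:54:24.295586863 minus the podCreationTimestamp of 20:54:23 gives exactly the reported podStartSLOduration of 1.295586863s. A tiny check with values copied from those entries:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the pod_startup_latency_tracker entries above.
	created, _ := time.Parse(time.RFC3339, "2024-11-12T20:54:23Z")
	running, _ := time.Parse(time.RFC3339Nano, "2024-11-12T20:54:24.295586863Z")
	fmt.Println(running.Sub(created)) // 1.295586863s
}
```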
Nov 12 20:54:32.843703 kubelet[2741]: E1112 20:54:32.843635 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:33.079724 kubelet[2741]: E1112 20:54:33.079667 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:33.261387 kubelet[2741]: E1112 20:54:33.261082 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:33.261387 kubelet[2741]: E1112 20:54:33.261325 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:34.105788 kubelet[2741]: I1112 20:54:34.105754 2741 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:54:34.106417 kubelet[2741]: I1112 20:54:34.106268 2741 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:54:34.106472 containerd[1570]: time="2024-11-12T20:54:34.106104844Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:54:35.015403 kubelet[2741]: I1112 20:54:35.015356 2741 topology_manager.go:215] "Topology Admit Handler" podUID="e6967af9-92f8-4136-bf62-3fb356772034" podNamespace="kube-system" podName="kube-proxy-pll4c" Nov 12 20:54:35.073975 kubelet[2741]: I1112 20:54:35.073878 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6967af9-92f8-4136-bf62-3fb356772034-xtables-lock\") pod \"kube-proxy-pll4c\" (UID: \"e6967af9-92f8-4136-bf62-3fb356772034\") " pod="kube-system/kube-proxy-pll4c" Nov 12 20:54:35.073975 kubelet[2741]: I1112 20:54:35.073977 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6967af9-92f8-4136-bf62-3fb356772034-lib-modules\") pod \"kube-proxy-pll4c\" (UID: \"e6967af9-92f8-4136-bf62-3fb356772034\") " pod="kube-system/kube-proxy-pll4c" Nov 12 20:54:35.074230 kubelet[2741]: I1112 20:54:35.074009 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6967af9-92f8-4136-bf62-3fb356772034-kube-proxy\") pod \"kube-proxy-pll4c\" (UID: \"e6967af9-92f8-4136-bf62-3fb356772034\") " pod="kube-system/kube-proxy-pll4c" Nov 12 20:54:35.074230 kubelet[2741]: I1112 20:54:35.074042 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgxrc\" (UniqueName: \"kubernetes.io/projected/e6967af9-92f8-4136-bf62-3fb356772034-kube-api-access-pgxrc\") pod \"kube-proxy-pll4c\" (UID: \"e6967af9-92f8-4136-bf62-3fb356772034\") " pod="kube-system/kube-proxy-pll4c" Nov 12 20:54:35.144939 kubelet[2741]: I1112 20:54:35.140432 2741 topology_manager.go:215] "Topology Admit Handler" podUID="a1315314-cee8-4ce0-b24b-7613c3eab03f" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-96zzj" Nov 12 20:54:35.174578 kubelet[2741]: I1112 20:54:35.174514 2741 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a1315314-cee8-4ce0-b24b-7613c3eab03f-var-lib-calico\") pod \"tigera-operator-56b74f76df-96zzj\" (UID: \"a1315314-cee8-4ce0-b24b-7613c3eab03f\") " pod="tigera-operator/tigera-operator-56b74f76df-96zzj" Nov 12 20:54:35.174578 kubelet[2741]: I1112 20:54:35.174598 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rrwb\" (UniqueName: \"kubernetes.io/projected/a1315314-cee8-4ce0-b24b-7613c3eab03f-kube-api-access-9rrwb\") pod \"tigera-operator-56b74f76df-96zzj\" (UID: \"a1315314-cee8-4ce0-b24b-7613c3eab03f\") " pod="tigera-operator/tigera-operator-56b74f76df-96zzj" Nov 12 20:54:35.321140 kubelet[2741]: E1112 20:54:35.321059 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:35.322332 containerd[1570]: time="2024-11-12T20:54:35.322278392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pll4c,Uid:e6967af9-92f8-4136-bf62-3fb356772034,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:35.347835 containerd[1570]: time="2024-11-12T20:54:35.346875873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:35.347835 containerd[1570]: time="2024-11-12T20:54:35.347642386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:35.347835 containerd[1570]: time="2024-11-12T20:54:35.347660169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:35.347835 containerd[1570]: time="2024-11-12T20:54:35.347800482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:35.387977 containerd[1570]: time="2024-11-12T20:54:35.387933107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pll4c,Uid:e6967af9-92f8-4136-bf62-3fb356772034,Namespace:kube-system,Attempt:0,} returns sandbox id \"03baffb3f45d72f6978724227ab872eafddf68d9a59eb2c511721f1115c75bea\"" Nov 12 20:54:35.388735 kubelet[2741]: E1112 20:54:35.388708 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:35.391020 containerd[1570]: time="2024-11-12T20:54:35.390892741Z" level=info msg="CreateContainer within sandbox \"03baffb3f45d72f6978724227ab872eafddf68d9a59eb2c511721f1115c75bea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:54:35.409560 containerd[1570]: time="2024-11-12T20:54:35.409495942Z" level=info msg="CreateContainer within sandbox \"03baffb3f45d72f6978724227ab872eafddf68d9a59eb2c511721f1115c75bea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed37e2a16c6078aadf4f9391bd969eae0079289d7a0ffe8b520988f3abb4e21d\"" Nov 12 20:54:35.410272 containerd[1570]: time="2024-11-12T20:54:35.410238009Z" level=info msg="StartContainer for \"ed37e2a16c6078aadf4f9391bd969eae0079289d7a0ffe8b520988f3abb4e21d\"" Nov 12 20:54:35.450702 containerd[1570]: time="2024-11-12T20:54:35.450648974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-96zzj,Uid:a1315314-cee8-4ce0-b24b-7613c3eab03f,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:54:35.481610 containerd[1570]: time="2024-11-12T20:54:35.481565943Z" level=info msg="StartContainer for \"ed37e2a16c6078aadf4f9391bd969eae0079289d7a0ffe8b520988f3abb4e21d\" returns successfully" Nov 12 20:54:35.484679 containerd[1570]: time="2024-11-12T20:54:35.484488178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:35.484679 containerd[1570]: time="2024-11-12T20:54:35.484601771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:35.484679 containerd[1570]: time="2024-11-12T20:54:35.484623051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:35.484902 containerd[1570]: time="2024-11-12T20:54:35.484750179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:35.548507 containerd[1570]: time="2024-11-12T20:54:35.548446360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-96zzj,Uid:a1315314-cee8-4ce0-b24b-7613c3eab03f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6db0cafee42ca1dbc013560e5924ac336bbd3bc4143ef756a8bfd5d0d75ad966\"" Nov 12 20:54:35.551349 containerd[1570]: time="2024-11-12T20:54:35.550883457Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:54:36.268086 kubelet[2741]: E1112 20:54:36.268041 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:36.328512 kubelet[2741]: I1112 20:54:36.328454 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pll4c" podStartSLOduration=1.328389945 podStartE2EDuration="1.328389945s" podCreationTimestamp="2024-11-12 20:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:36.328133706 +0000 UTC m=+13.218639371" watchObservedRunningTime="2024-11-12 20:54:36.328389945 +0000 UTC m=+13.218895600" Nov 12 20:54:36.649055 kubelet[2741]: E1112 20:54:36.649017 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:40.169522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660558478.mount: Deactivated successfully. Nov 12 20:54:42.432665 containerd[1570]: time="2024-11-12T20:54:42.432220805Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:42.461877 containerd[1570]: time="2024-11-12T20:54:42.461784565Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763375" Nov 12 20:54:42.510341 containerd[1570]: time="2024-11-12T20:54:42.510274355Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:42.583353 containerd[1570]: time="2024-11-12T20:54:42.583282648Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:42.584112 containerd[1570]: time="2024-11-12T20:54:42.584079520Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 7.033101907s" Nov 12 20:54:42.584174 containerd[1570]: time="2024-11-12T20:54:42.584114986Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:54:42.585795 containerd[1570]: time="2024-11-12T20:54:42.585771297Z" level=info msg="CreateContainer within sandbox \"6db0cafee42ca1dbc013560e5924ac336bbd3bc4143ef756a8bfd5d0d75ad966\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:54:43.019602 containerd[1570]: time="2024-11-12T20:54:43.019536463Z" level=info msg="CreateContainer within sandbox \"6db0cafee42ca1dbc013560e5924ac336bbd3bc4143ef756a8bfd5d0d75ad966\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0241dacad44b3a5e338e1ecb567f0ce7f27e247914a067c591f6b897c4013634\"" Nov 12 20:54:43.020164 containerd[1570]: time="2024-11-12T20:54:43.020132540Z" level=info msg="StartContainer for \"0241dacad44b3a5e338e1ecb567f0ce7f27e247914a067c591f6b897c4013634\"" Nov 12 20:54:43.158316 containerd[1570]: time="2024-11-12T20:54:43.158258085Z" level=info msg="StartContainer for \"0241dacad44b3a5e338e1ecb567f0ce7f27e247914a067c591f6b897c4013634\" returns successfully" Nov 12 20:54:43.375460 kubelet[2741]: I1112 20:54:43.374767 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-96zzj" podStartSLOduration=1.339775426 podStartE2EDuration="8.374611787s" podCreationTimestamp="2024-11-12 20:54:35 +0000 UTC" firstStartedPulling="2024-11-12 20:54:35.549654559 +0000 UTC m=+12.440160214" lastFinishedPulling="2024-11-12 20:54:42.58449093 +0000 UTC m=+19.474996575" observedRunningTime="2024-11-12 20:54:43.374312115 +0000 UTC m=+20.264817770" watchObservedRunningTime="2024-11-12 20:54:43.374611787 +0000 UTC m=+20.265117442" Nov 12 20:54:46.124666 kubelet[2741]: I1112 20:54:46.124513 2741 topology_manager.go:215] "Topology Admit Handler" podUID="8640138f-01e9-491c-b3fb-2790f3710326" podNamespace="calico-system" podName="calico-typha-585d95df97-9n7tm" Nov 12 20:54:46.177156 kubelet[2741]: I1112 20:54:46.177109 2741 topology_manager.go:215] "Topology Admit Handler" podUID="ab60310d-7ef7-4bb9-b5dc-e48346eed3df" podNamespace="calico-system" podName="calico-node-95mnj" Nov 12 20:54:46.236186 kubelet[2741]: I1112 20:54:46.236139 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-lib-modules\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236186 kubelet[2741]: I1112 20:54:46.236189 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62rl5\" (UniqueName: \"kubernetes.io/projected/8640138f-01e9-491c-b3fb-2790f3710326-kube-api-access-62rl5\") pod \"calico-typha-585d95df97-9n7tm\" (UID: \"8640138f-01e9-491c-b3fb-2790f3710326\") " pod="calico-system/calico-typha-585d95df97-9n7tm" Nov 12 20:54:46.236373 kubelet[2741]: I1112 20:54:46.236216 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-xtables-lock\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236373 kubelet[2741]: I1112 20:54:46.236292 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-cni-log-dir\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236373 kubelet[2741]: I1112 20:54:46.236338 2741 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-cni-bin-dir\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236373 kubelet[2741]: I1112 20:54:46.236368 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-cni-net-dir\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236530 kubelet[2741]: I1112 20:54:46.236395 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-var-lib-calico\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236530 kubelet[2741]: I1112 20:54:46.236434 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8640138f-01e9-491c-b3fb-2790f3710326-typha-certs\") pod \"calico-typha-585d95df97-9n7tm\" (UID: \"8640138f-01e9-491c-b3fb-2790f3710326\") " pod="calico-system/calico-typha-585d95df97-9n7tm" Nov 12 20:54:46.236530 kubelet[2741]: I1112 20:54:46.236488 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8640138f-01e9-491c-b3fb-2790f3710326-tigera-ca-bundle\") pod \"calico-typha-585d95df97-9n7tm\" (UID: \"8640138f-01e9-491c-b3fb-2790f3710326\") " pod="calico-system/calico-typha-585d95df97-9n7tm" Nov 12 20:54:46.236530 kubelet[2741]: I1112 20:54:46.236528 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-policysync\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236671 kubelet[2741]: I1112 20:54:46.236579 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-tigera-ca-bundle\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236671 kubelet[2741]: I1112 20:54:46.236618 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-node-certs\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236671 kubelet[2741]: I1112 20:54:46.236646 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-var-run-calico\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236671 kubelet[2741]: I1112 20:54:46.236672 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-flexvol-driver-host\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.236785 kubelet[2741]: I1112 20:54:46.236709 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppstq\" (UniqueName: \"kubernetes.io/projected/ab60310d-7ef7-4bb9-b5dc-e48346eed3df-kube-api-access-ppstq\") pod \"calico-node-95mnj\" (UID: \"ab60310d-7ef7-4bb9-b5dc-e48346eed3df\") " pod="calico-system/calico-node-95mnj" Nov 12 20:54:46.280408 kubelet[2741]: I1112 20:54:46.280354 2741 topology_manager.go:215] "Topology Admit Handler" podUID="3131bc61-5520-4f07-bd62-766f60d48de0" podNamespace="calico-system" podName="csi-node-driver-ll6wf" Nov 12 20:54:46.281879 kubelet[2741]: E1112 20:54:46.281846 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0" Nov 12 20:54:46.337938 kubelet[2741]: I1112 20:54:46.337018 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnqm2\" (UniqueName: \"kubernetes.io/projected/3131bc61-5520-4f07-bd62-766f60d48de0-kube-api-access-dnqm2\") pod \"csi-node-driver-ll6wf\" (UID: \"3131bc61-5520-4f07-bd62-766f60d48de0\") " pod="calico-system/csi-node-driver-ll6wf" Nov 12 20:54:46.337938 kubelet[2741]: I1112 20:54:46.337085 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3131bc61-5520-4f07-bd62-766f60d48de0-socket-dir\") pod \"csi-node-driver-ll6wf\" (UID: \"3131bc61-5520-4f07-bd62-766f60d48de0\") " pod="calico-system/csi-node-driver-ll6wf" Nov 12 20:54:46.337938 kubelet[2741]: I1112 20:54:46.337154 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3131bc61-5520-4f07-bd62-766f60d48de0-kubelet-dir\") pod \"csi-node-driver-ll6wf\" (UID: \"3131bc61-5520-4f07-bd62-766f60d48de0\") " pod="calico-system/csi-node-driver-ll6wf" Nov 12 20:54:46.337938 kubelet[2741]: I1112 20:54:46.337174 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3131bc61-5520-4f07-bd62-766f60d48de0-varrun\") pod \"csi-node-driver-ll6wf\" (UID: \"3131bc61-5520-4f07-bd62-766f60d48de0\") " pod="calico-system/csi-node-driver-ll6wf" Nov 12 20:54:46.337938 kubelet[2741]: I1112 20:54:46.337193 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3131bc61-5520-4f07-bd62-766f60d48de0-registration-dir\") pod \"csi-node-driver-ll6wf\" (UID: \"3131bc61-5520-4f07-bd62-766f60d48de0\") " pod="calico-system/csi-node-driver-ll6wf" Nov 12 20:54:46.365279 kubelet[2741]: E1112 20:54:46.365217 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.365279 kubelet[2741]: W1112 20:54:46.365271 2741 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.365633 kubelet[2741]: E1112 20:54:46.365316 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.365740 kubelet[2741]: E1112 20:54:46.365719 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.365798 kubelet[2741]: W1112 20:54:46.365784 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.366061 kubelet[2741]: E1112 20:54:46.365849 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.367084 kubelet[2741]: E1112 20:54:46.367069 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.367165 kubelet[2741]: W1112 20:54:46.367149 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.367222 kubelet[2741]: E1112 20:54:46.367213 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.371080 kubelet[2741]: E1112 20:54:46.371020 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.371080 kubelet[2741]: W1112 20:54:46.371036 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.371080 kubelet[2741]: E1112 20:54:46.371051 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.438320 kubelet[2741]: E1112 20:54:46.438184 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.438320 kubelet[2741]: W1112 20:54:46.438214 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.438320 kubelet[2741]: E1112 20:54:46.438256 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:46.438610 kubelet[2741]: E1112 20:54:46.438571 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:46.438881 kubelet[2741]: E1112 20:54:46.438855 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.438933 kubelet[2741]: W1112 20:54:46.438900 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.438988 kubelet[2741]: E1112 20:54:46.438951 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.439593 kubelet[2741]: E1112 20:54:46.439265 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.439593 kubelet[2741]: W1112 20:54:46.439294 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.439593 kubelet[2741]: E1112 20:54:46.439343 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.440237 containerd[1570]: time="2024-11-12T20:54:46.439804227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-585d95df97-9n7tm,Uid:8640138f-01e9-491c-b3fb-2790f3710326,Namespace:calico-system,Attempt:0,}" Nov 12 20:54:46.441137 kubelet[2741]: E1112 20:54:46.440754 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.441137 kubelet[2741]: W1112 20:54:46.440779 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.441137 kubelet[2741]: E1112 20:54:46.440835 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.441581 kubelet[2741]: E1112 20:54:46.441554 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.441581 kubelet[2741]: W1112 20:54:46.441573 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.441668 kubelet[2741]: E1112 20:54:46.441596 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:46.442649 kubelet[2741]: E1112 20:54:46.442603 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.442649 kubelet[2741]: W1112 20:54:46.442626 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.442814 kubelet[2741]: E1112 20:54:46.442658 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.443095 kubelet[2741]: E1112 20:54:46.443062 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.443095 kubelet[2741]: W1112 20:54:46.443092 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.443179 kubelet[2741]: E1112 20:54:46.443127 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.443422 kubelet[2741]: E1112 20:54:46.443393 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.443422 kubelet[2741]: W1112 20:54:46.443408 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.443557 kubelet[2741]: E1112 20:54:46.443434 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.443737 kubelet[2741]: E1112 20:54:46.443718 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.443737 kubelet[2741]: W1112 20:54:46.443734 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.443803 kubelet[2741]: E1112 20:54:46.443756 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.444317 kubelet[2741]: E1112 20:54:46.444140 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.444635 kubelet[2741]: W1112 20:54:46.444196 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.444635 kubelet[2741]: E1112 20:54:46.444554 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:46.444974 kubelet[2741]: E1112 20:54:46.444856 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.444974 kubelet[2741]: W1112 20:54:46.444873 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.444974 kubelet[2741]: E1112 20:54:46.444939 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.445396 kubelet[2741]: E1112 20:54:46.445371 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.445396 kubelet[2741]: W1112 20:54:46.445391 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.445547 kubelet[2741]: E1112 20:54:46.445446 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.445816 kubelet[2741]: E1112 20:54:46.445786 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.445816 kubelet[2741]: W1112 20:54:46.445804 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.445816 kubelet[2741]: E1112 20:54:46.445831 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.446186 kubelet[2741]: E1112 20:54:46.446160 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.446186 kubelet[2741]: W1112 20:54:46.446174 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.446292 kubelet[2741]: E1112 20:54:46.446194 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.447107 kubelet[2741]: E1112 20:54:46.446500 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.447107 kubelet[2741]: W1112 20:54:46.446529 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.447107 kubelet[2741]: E1112 20:54:46.446585 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:46.447107 kubelet[2741]: E1112 20:54:46.446867 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.447107 kubelet[2741]: W1112 20:54:46.446878 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.447107 kubelet[2741]: E1112 20:54:46.447009 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.448080 kubelet[2741]: E1112 20:54:46.447323 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.448080 kubelet[2741]: W1112 20:54:46.447336 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.448080 kubelet[2741]: E1112 20:54:46.447396 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.448080 kubelet[2741]: E1112 20:54:46.447639 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.448080 kubelet[2741]: W1112 20:54:46.447653 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.448080 kubelet[2741]: E1112 20:54:46.447679 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.448080 kubelet[2741]: E1112 20:54:46.448041 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.448080 kubelet[2741]: W1112 20:54:46.448051 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.448080 kubelet[2741]: E1112 20:54:46.448068 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.448432 kubelet[2741]: E1112 20:54:46.448357 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.448432 kubelet[2741]: W1112 20:54:46.448369 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.448432 kubelet[2741]: E1112 20:54:46.448393 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:46.448672 kubelet[2741]: E1112 20:54:46.448645 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.448672 kubelet[2741]: W1112 20:54:46.448658 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.448809 kubelet[2741]: E1112 20:54:46.448739 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.449022 kubelet[2741]: E1112 20:54:46.448855 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.449022 kubelet[2741]: W1112 20:54:46.449018 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.449200 kubelet[2741]: E1112 20:54:46.449065 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.449734 kubelet[2741]: E1112 20:54:46.449388 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.449734 kubelet[2741]: W1112 20:54:46.449437 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.449734 kubelet[2741]: E1112 20:54:46.449467 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.449888 kubelet[2741]: E1112 20:54:46.449855 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.449985 kubelet[2741]: W1112 20:54:46.449966 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.450032 kubelet[2741]: E1112 20:54:46.449994 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.450379 kubelet[2741]: E1112 20:54:46.450348 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.450379 kubelet[2741]: W1112 20:54:46.450371 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.450379 kubelet[2741]: E1112 20:54:46.450384 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:46.464898 kubelet[2741]: E1112 20:54:46.464860 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:46.464898 kubelet[2741]: W1112 20:54:46.464887 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:46.465311 kubelet[2741]: E1112 20:54:46.464995 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:46.478750 containerd[1570]: time="2024-11-12T20:54:46.478437117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:46.478750 containerd[1570]: time="2024-11-12T20:54:46.478543146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:46.478750 containerd[1570]: time="2024-11-12T20:54:46.478555059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:46.478750 containerd[1570]: time="2024-11-12T20:54:46.478661157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:46.482369 kubelet[2741]: E1112 20:54:46.481864 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:46.483012 containerd[1570]: time="2024-11-12T20:54:46.482945493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-95mnj,Uid:ab60310d-7ef7-4bb9-b5dc-e48346eed3df,Namespace:calico-system,Attempt:0,}" Nov 12 20:54:46.534188 containerd[1570]: time="2024-11-12T20:54:46.533956674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:54:46.534497 containerd[1570]: time="2024-11-12T20:54:46.534362194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:54:46.537106 containerd[1570]: time="2024-11-12T20:54:46.536690967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:46.537106 containerd[1570]: time="2024-11-12T20:54:46.536826351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:54:46.568425 containerd[1570]: time="2024-11-12T20:54:46.568382352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-585d95df97-9n7tm,Uid:8640138f-01e9-491c-b3fb-2790f3710326,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f1a28035688ae211aacdf859bb566ccb27cfa3abb50b556a3afdc3736e6f1c8\"" Nov 12 20:54:46.571690 kubelet[2741]: E1112 20:54:46.571608 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:46.581865 containerd[1570]: time="2024-11-12T20:54:46.581283502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:54:46.601830 containerd[1570]: time="2024-11-12T20:54:46.601747111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-95mnj,Uid:ab60310d-7ef7-4bb9-b5dc-e48346eed3df,Namespace:calico-system,Attempt:0,} returns sandbox id \"14b22e1448a7088e9796bad48b115a539e74545440272f1a7bcbc9f1a851c16d\"" Nov 12 20:54:46.602504 kubelet[2741]: E1112 20:54:46.602481 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:48.228726 kubelet[2741]: E1112 20:54:48.228671 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0" Nov 12 20:54:50.011303 containerd[1570]: time="2024-11-12T20:54:50.011205126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:50.012281 containerd[1570]: time="2024-11-12T20:54:50.012239685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:54:50.014062 containerd[1570]: time="2024-11-12T20:54:50.014011136Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:50.016364 containerd[1570]: time="2024-11-12T20:54:50.016331144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:50.016880 containerd[1570]: time="2024-11-12T20:54:50.016845899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 3.435517252s" Nov 12 20:54:50.016943 containerd[1570]: time="2024-11-12T20:54:50.016880143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:54:50.017510 containerd[1570]: time="2024-11-12T20:54:50.017479406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:54:50.026073 
containerd[1570]: time="2024-11-12T20:54:50.025464452Z" level=info msg="CreateContainer within sandbox \"7f1a28035688ae211aacdf859bb566ccb27cfa3abb50b556a3afdc3736e6f1c8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:54:50.045034 containerd[1570]: time="2024-11-12T20:54:50.044985480Z" level=info msg="CreateContainer within sandbox \"7f1a28035688ae211aacdf859bb566ccb27cfa3abb50b556a3afdc3736e6f1c8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"20a113c7abfad01030e5c5821ec6380c91ee5e9007ca65bb9b298768dc3d8a81\"" Nov 12 20:54:50.045517 containerd[1570]: time="2024-11-12T20:54:50.045490135Z" level=info msg="StartContainer for \"20a113c7abfad01030e5c5821ec6380c91ee5e9007ca65bb9b298768dc3d8a81\"" Nov 12 20:54:50.118981 containerd[1570]: time="2024-11-12T20:54:50.118827098Z" level=info msg="StartContainer for \"20a113c7abfad01030e5c5821ec6380c91ee5e9007ca65bb9b298768dc3d8a81\" returns successfully" Nov 12 20:54:50.228529 kubelet[2741]: E1112 20:54:50.228445 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0" Nov 12 20:54:50.303619 kubelet[2741]: E1112 20:54:50.303563 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:54:50.340449 kubelet[2741]: I1112 20:54:50.340330 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-585d95df97-9n7tm" podStartSLOduration=0.903673937 podStartE2EDuration="4.340283076s" podCreationTimestamp="2024-11-12 20:54:46 +0000 UTC" firstStartedPulling="2024-11-12 20:54:46.58066332 +0000 UTC m=+23.471168975" lastFinishedPulling="2024-11-12 20:54:50.017272448 +0000 UTC m=+26.907778114" observedRunningTime="2024-11-12 20:54:50.339737683 +0000 UTC m=+27.230243338" watchObservedRunningTime="2024-11-12 20:54:50.340283076 +0000 UTC m=+27.230788731" Nov 12 20:54:50.349481 kubelet[2741]: E1112 20:54:50.349443 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.349481 kubelet[2741]: W1112 20:54:50.349472 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.349616 kubelet[2741]: E1112 20:54:50.349493 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:50.349786 kubelet[2741]: E1112 20:54:50.349763 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.349786 kubelet[2741]: W1112 20:54:50.349776 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.349845 kubelet[2741]: E1112 20:54:50.349790 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:50.350041 kubelet[2741]: E1112 20:54:50.350015 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.350041 kubelet[2741]: W1112 20:54:50.350026 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.350041 kubelet[2741]: E1112 20:54:50.350036 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:50.350306 kubelet[2741]: E1112 20:54:50.350293 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.350341 kubelet[2741]: W1112 20:54:50.350311 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.350341 kubelet[2741]: E1112 20:54:50.350333 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:50.350583 kubelet[2741]: E1112 20:54:50.350567 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.350583 kubelet[2741]: W1112 20:54:50.350580 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.350653 kubelet[2741]: E1112 20:54:50.350591 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:50.350840 kubelet[2741]: E1112 20:54:50.350822 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.350840 kubelet[2741]: W1112 20:54:50.350838 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.350947 kubelet[2741]: E1112 20:54:50.350849 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:50.351089 kubelet[2741]: E1112 20:54:50.351062 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.351089 kubelet[2741]: W1112 20:54:50.351087 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.351169 kubelet[2741]: E1112 20:54:50.351101 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:54:50.351611 kubelet[2741]: E1112 20:54:50.351392 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.351611 kubelet[2741]: W1112 20:54:50.351426 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.351611 kubelet[2741]: E1112 20:54:50.351456 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:50.351933 kubelet[2741]: E1112 20:54:50.351888 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.352093 kubelet[2741]: W1112 20:54:50.352011 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.352093 kubelet[2741]: E1112 20:54:50.352031 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:50.352380 kubelet[2741]: E1112 20:54:50.352350 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.352380 kubelet[2741]: W1112 20:54:50.352362 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.352380 kubelet[2741]: E1112 20:54:50.352374 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:50.352645 kubelet[2741]: E1112 20:54:50.352584 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.352645 kubelet[2741]: W1112 20:54:50.352592 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.352645 kubelet[2741]: E1112 20:54:50.352610 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:54:50.352825 kubelet[2741]: E1112 20:54:50.352806 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:54:50.352825 kubelet[2741]: W1112 20:54:50.352819 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:54:50.352898 kubelet[2741]: E1112 20:54:50.352833 2741 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 12 20:54:51.304923 kubelet[2741]: I1112 20:54:51.304865 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:54:51.305566 kubelet[2741]: E1112 20:54:51.305549 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[... the FlexVolume probe-failure triplet then resumes and repeats through Nov 12 20:54:51.385 ...]
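
The repeating triplet above is the kubelet's FlexVolume plugin prober: each pass discovers the vendor directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to exec the missing uds binary with the init argument, receives empty stdout, and then fails to unmarshal that empty output as JSON ("unexpected end of JSON input"). As an illustration only (not the actual nodeagent~uds driver), a minimal FlexVolume driver in Go would have to answer init with a JSON status object on stdout, roughly:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet's driver-call.go expects
// a FlexVolume driver to print on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	cmd := ""
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}
	switch cmd {
	case "init":
		// Printing nothing here (or the binary being absent altogether,
		// as in the log) is exactly what yields "unexpected end of JSON
		// input" in driver-call.go.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
		os.Exit(1)
	}
}

The errors are noisy but self-resolving here: Calico's pod2daemon-flexvol init container, pulled in the entries that follow, installs the real uds binary into that directory.
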
Nov 12 20:54:52.229207 kubelet[2741]: E1112 20:54:52.229146 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0"
Nov 12 20:54:52.520257 containerd[1570]: time="2024-11-12T20:54:52.520064879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:54:52.538579 containerd[1570]: time="2024-11-12T20:54:52.538479819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116"
Nov 12 20:54:52.542434 containerd[1570]: time="2024-11-12T20:54:52.542377505Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:54:52.549827 containerd[1570]: time="2024-11-12T20:54:52.549754875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:54:52.550562 containerd[1570]: time="2024-11-12T20:54:52.550517865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 2.533005968s"
Nov 12 20:54:52.550610 containerd[1570]: time="2024-11-12T20:54:52.550560445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\""
Nov 12 20:54:52.552278 containerd[1570]: time="2024-11-12T20:54:52.552244301Z" level=info msg="CreateContainer within sandbox \"14b22e1448a7088e9796bad48b115a539e74545440272f1a7bcbc9f1a851c16d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 12 20:54:52.573864 containerd[1570]: time="2024-11-12T20:54:52.573803565Z" level=info msg="CreateContainer within sandbox \"14b22e1448a7088e9796bad48b115a539e74545440272f1a7bcbc9f1a851c16d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"89021fe2bcdeced452f4c4c947e44e32ca7460bce022cbace75988830c5b955b\""
Nov 12 20:54:52.574448 containerd[1570]: time="2024-11-12T20:54:52.574407348Z" level=info msg="StartContainer for \"89021fe2bcdeced452f4c4c947e44e32ca7460bce022cbace75988830c5b955b\""
Nov 12 20:54:52.638750 containerd[1570]: time="2024-11-12T20:54:52.638669030Z" level=info msg="StartContainer for \"89021fe2bcdeced452f4c4c947e44e32ca7460bce022cbace75988830c5b955b\" returns successfully"
Nov 12 20:54:52.677534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89021fe2bcdeced452f4c4c947e44e32ca7460bce022cbace75988830c5b955b-rootfs.mount: Deactivated successfully.
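
The ImageCreate/Pulled/PullImage sequence above is containerd registering the pod2daemon-flexvol image under its tag, its manifest digest, and its config ID. A sketch of the same pull through the containerd Go client is below; this is illustrative only (the kubelet actually drives the pull over the CRI), and the socket path and "k8s.io" namespace are the conventional defaults rather than values taken from this log:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to containerd; Kubernetes-managed images live in the
	// "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image the log shows being fetched.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
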
Nov 12 20:54:52.767847 containerd[1570]: time="2024-11-12T20:54:52.767738891Z" level=info msg="shim disconnected" id=89021fe2bcdeced452f4c4c947e44e32ca7460bce022cbace75988830c5b955b namespace=k8s.io
Nov 12 20:54:52.767847 containerd[1570]: time="2024-11-12T20:54:52.767833930Z" level=warning msg="cleaning up after shim disconnected" id=89021fe2bcdeced452f4c4c947e44e32ca7460bce022cbace75988830c5b955b namespace=k8s.io
Nov 12 20:54:52.767847 containerd[1570]: time="2024-11-12T20:54:52.767847074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:54:53.310651 kubelet[2741]: E1112 20:54:53.310609 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:53.311705 containerd[1570]: time="2024-11-12T20:54:53.311644372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\""
Nov 12 20:54:54.228596 kubelet[2741]: E1112 20:54:54.228525 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0"
Nov 12 20:54:56.228740 kubelet[2741]: E1112 20:54:56.228688 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0"
Nov 12 20:54:57.781886 containerd[1570]: time="2024-11-12T20:54:57.781816026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:54:57.782705 containerd[1570]: time="2024-11-12T20:54:57.782658235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683"
Nov 12 20:54:57.783843 containerd[1570]: time="2024-11-12T20:54:57.783798653Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:54:57.786310 containerd[1570]: time="2024-11-12T20:54:57.786276660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:54:57.786920 containerd[1570]: time="2024-11-12T20:54:57.786874050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 4.475189542s"
Nov 12 20:54:57.786955 containerd[1570]: time="2024-11-12T20:54:57.786923923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\""
Nov 12 20:54:57.788883 containerd[1570]: time="2024-11-12T20:54:57.788858551Z" level=info msg="CreateContainer within sandbox \"14b22e1448a7088e9796bad48b115a539e74545440272f1a7bcbc9f1a851c16d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 12 20:54:57.923000 containerd[1570]: time="2024-11-12T20:54:57.922935499Z" level=info msg="CreateContainer within sandbox \"14b22e1448a7088e9796bad48b115a539e74545440272f1a7bcbc9f1a851c16d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b33b82a9109cef0b1e2b6b0507885ac965fe0067c6f55bc57a7495cd91cc27e3\""
Nov 12 20:54:57.923764 containerd[1570]: time="2024-11-12T20:54:57.923732113Z" level=info msg="StartContainer for \"b33b82a9109cef0b1e2b6b0507885ac965fe0067c6f55bc57a7495cd91cc27e3\""
Nov 12 20:54:58.228854 kubelet[2741]: E1112 20:54:58.228731 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0"
Nov 12 20:54:58.236208 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:50790.service - OpenSSH per-connection server daemon (10.0.0.1:50790).
Nov 12 20:54:58.530800 kubelet[2741]: I1112 20:54:58.480833 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:54:58.530800 kubelet[2741]: E1112 20:54:58.481465 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:58.900864 containerd[1570]: time="2024-11-12T20:54:58.900692340Z" level=info msg="StartContainer for \"b33b82a9109cef0b1e2b6b0507885ac965fe0067c6f55bc57a7495cd91cc27e3\" returns successfully"
Nov 12 20:54:58.902262 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 50790 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:54:58.903693 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:54:58.904528 kubelet[2741]: E1112 20:54:58.904487 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:58.920945 systemd-logind[1552]: New session 8 of user core.
Nov 12 20:54:58.930260 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 12 20:54:59.503225 sshd[3480]: pam_unix(sshd:session): session closed for user core
Nov 12 20:54:59.507969 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:50790.service: Deactivated successfully.
Nov 12 20:54:59.512495 systemd[1]: session-8.scope: Deactivated successfully.
Nov 12 20:54:59.515097 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit.
Nov 12 20:54:59.516770 systemd-logind[1552]: Removed session 8.
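
The recurring dns.go:153 events come from the kubelet's resolv.conf validation: it applies at most three nameservers and warns when the node lists more, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A standalone sketch of that check, assuming the standard three-nameserver cap and the default /etc/resolv.conf path:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// maxNameservers mirrors the kubelet's cap on resolv.conf nameservers,
// the source of the "Nameserver limits exceeded" events above.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line, then truncate to the cap.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, omitting %d of %d nameservers\n",
			len(servers)-maxNameservers, len(servers))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
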
Nov 12 20:54:59.905799 kubelet[2741]: E1112 20:54:59.905749 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:54:59.906377 kubelet[2741]: E1112 20:54:59.905894 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:00.228619 kubelet[2741]: E1112 20:55:00.228465 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0"
Nov 12 20:55:02.228945 kubelet[2741]: E1112 20:55:02.228875 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0"
Nov 12 20:55:03.019572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b33b82a9109cef0b1e2b6b0507885ac965fe0067c6f55bc57a7495cd91cc27e3-rootfs.mount: Deactivated successfully.
Nov 12 20:55:03.080453 kubelet[2741]: I1112 20:55:03.080379 2741 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 12 20:55:03.082275 containerd[1570]: time="2024-11-12T20:55:03.081859461Z" level=info msg="shim disconnected" id=b33b82a9109cef0b1e2b6b0507885ac965fe0067c6f55bc57a7495cd91cc27e3 namespace=k8s.io
Nov 12 20:55:03.082275 containerd[1570]: time="2024-11-12T20:55:03.081983604Z" level=warning msg="cleaning up after shim disconnected" id=b33b82a9109cef0b1e2b6b0507885ac965fe0067c6f55bc57a7495cd91cc27e3 namespace=k8s.io
Nov 12 20:55:03.082275 containerd[1570]: time="2024-11-12T20:55:03.081994935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:55:03.109438 kubelet[2741]: I1112 20:55:03.109188 2741 topology_manager.go:215] "Topology Admit Handler" podUID="18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4" podNamespace="kube-system" podName="coredns-76f75df574-mxktf"
Nov 12 20:55:03.115472 kubelet[2741]: I1112 20:55:03.115370 2741 topology_manager.go:215] "Topology Admit Handler" podUID="d3e73151-088a-437b-9a45-b13477085c0c" podNamespace="kube-system" podName="coredns-76f75df574-fmv8c"
Nov 12 20:55:03.115730 kubelet[2741]: I1112 20:55:03.115651 2741 topology_manager.go:215] "Topology Admit Handler" podUID="c7f0cde4-77ad-4783-bce2-a9599e5f533e" podNamespace="calico-apiserver" podName="calico-apiserver-7669974dd4-n7xsq"
Nov 12 20:55:03.116452 kubelet[2741]: I1112 20:55:03.115947 2741 topology_manager.go:215] "Topology Admit Handler" podUID="d030104e-d3be-4689-840e-40e7cceed6f7" podNamespace="calico-apiserver" podName="calico-apiserver-7669974dd4-rq2l5"
Nov 12 20:55:03.117941 kubelet[2741]: I1112 20:55:03.117727 2741 topology_manager.go:215] "Topology Admit Handler" podUID="2cd83f6e-605c-4278-926b-b78b4419f8ae" podNamespace="calico-system" podName="calico-kube-controllers-6f54cd56f9-7gnfq"
Nov 12 20:55:03.154194 kubelet[2741]: I1112 20:55:03.154023 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d030104e-d3be-4689-840e-40e7cceed6f7-calico-apiserver-certs\") pod \"calico-apiserver-7669974dd4-rq2l5\" (UID: \"d030104e-d3be-4689-840e-40e7cceed6f7\") " pod="calico-apiserver/calico-apiserver-7669974dd4-rq2l5"
Nov 12 20:55:03.154194 kubelet[2741]: I1112 20:55:03.154101 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3e73151-088a-437b-9a45-b13477085c0c-config-volume\") pod \"coredns-76f75df574-fmv8c\" (UID: \"d3e73151-088a-437b-9a45-b13477085c0c\") " pod="kube-system/coredns-76f75df574-fmv8c"
Nov 12 20:55:03.154194 kubelet[2741]: I1112 20:55:03.154121 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g69n\" (UniqueName: \"kubernetes.io/projected/c7f0cde4-77ad-4783-bce2-a9599e5f533e-kube-api-access-5g69n\") pod \"calico-apiserver-7669974dd4-n7xsq\" (UID: \"c7f0cde4-77ad-4783-bce2-a9599e5f533e\") " pod="calico-apiserver/calico-apiserver-7669974dd4-n7xsq"
Nov 12 20:55:03.154480 kubelet[2741]: I1112 20:55:03.154222 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwllx\" (UniqueName: \"kubernetes.io/projected/2cd83f6e-605c-4278-926b-b78b4419f8ae-kube-api-access-cwllx\") pod \"calico-kube-controllers-6f54cd56f9-7gnfq\" (UID: \"2cd83f6e-605c-4278-926b-b78b4419f8ae\") " pod="calico-system/calico-kube-controllers-6f54cd56f9-7gnfq"
Nov 12 20:55:03.154480 kubelet[2741]: I1112 20:55:03.154245 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4-config-volume\") pod \"coredns-76f75df574-mxktf\" (UID: \"18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4\") " pod="kube-system/coredns-76f75df574-mxktf"
Nov 12 20:55:03.154480 kubelet[2741]: I1112 20:55:03.154265 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6zh5\" (UniqueName: \"kubernetes.io/projected/d3e73151-088a-437b-9a45-b13477085c0c-kube-api-access-g6zh5\") pod \"coredns-76f75df574-fmv8c\" (UID: \"d3e73151-088a-437b-9a45-b13477085c0c\") " pod="kube-system/coredns-76f75df574-fmv8c"
Nov 12 20:55:03.154480 kubelet[2741]: I1112 20:55:03.154292 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f94q8\" (UniqueName: \"kubernetes.io/projected/d030104e-d3be-4689-840e-40e7cceed6f7-kube-api-access-f94q8\") pod \"calico-apiserver-7669974dd4-rq2l5\" (UID: \"d030104e-d3be-4689-840e-40e7cceed6f7\") " pod="calico-apiserver/calico-apiserver-7669974dd4-rq2l5"
Nov 12 20:55:03.154480 kubelet[2741]: I1112 20:55:03.154319 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c7f0cde4-77ad-4783-bce2-a9599e5f533e-calico-apiserver-certs\") pod \"calico-apiserver-7669974dd4-n7xsq\" (UID: \"c7f0cde4-77ad-4783-bce2-a9599e5f533e\") " pod="calico-apiserver/calico-apiserver-7669974dd4-n7xsq"
Nov 12 20:55:03.154655 kubelet[2741]: I1112 20:55:03.154348 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jm28\" (UniqueName: \"kubernetes.io/projected/18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4-kube-api-access-6jm28\") pod \"coredns-76f75df574-mxktf\" (UID: \"18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4\") " pod="kube-system/coredns-76f75df574-mxktf"
Nov 12 20:55:03.154655 kubelet[2741]: I1112 20:55:03.154409 2741 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cd83f6e-605c-4278-926b-b78b4419f8ae-tigera-ca-bundle\") pod \"calico-kube-controllers-6f54cd56f9-7gnfq\" (UID: \"2cd83f6e-605c-4278-926b-b78b4419f8ae\") " pod="calico-system/calico-kube-controllers-6f54cd56f9-7gnfq"
Nov 12 20:55:03.417476 kubelet[2741]: E1112 20:55:03.417423 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:03.418177 containerd[1570]: time="2024-11-12T20:55:03.418120504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mxktf,Uid:18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4,Namespace:kube-system,Attempt:0,}"
Nov 12 20:55:03.421843 containerd[1570]: time="2024-11-12T20:55:03.421795508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7669974dd4-n7xsq,Uid:c7f0cde4-77ad-4783-bce2-a9599e5f533e,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 20:55:03.426592 containerd[1570]: time="2024-11-12T20:55:03.426551970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7669974dd4-rq2l5,Uid:d030104e-d3be-4689-840e-40e7cceed6f7,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 20:55:03.429576 containerd[1570]: time="2024-11-12T20:55:03.429508034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f54cd56f9-7gnfq,Uid:2cd83f6e-605c-4278-926b-b78b4419f8ae,Namespace:calico-system,Attempt:0,}"
Nov 12 20:55:03.430667 kubelet[2741]: E1112 20:55:03.430636 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:03.430940 containerd[1570]: time="2024-11-12T20:55:03.430885357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fmv8c,Uid:d3e73151-088a-437b-9a45-b13477085c0c,Namespace:kube-system,Attempt:0,}"
Nov 12 20:55:03.679576 containerd[1570]: time="2024-11-12T20:55:03.679418661Z" level=error msg="Failed to destroy network for sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:55:03.681038 containerd[1570]: time="2024-11-12T20:55:03.679422287Z" level=error msg="Failed to destroy network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:55:03.681507 containerd[1570]: time="2024-11-12T20:55:03.681458076Z" level=error msg="encountered an error cleaning up failed sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7669974dd4-n7xsq,Uid:c7f0cde4-77ad-4783-bce2-a9599e5f533e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683247 containerd[1570]: time="2024-11-12T20:55:03.681842066Z" level=error msg="encountered an error cleaning up failed sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683247 containerd[1570]: time="2024-11-12T20:55:03.681890106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7669974dd4-rq2l5,Uid:d030104e-d3be-4689-840e-40e7cceed6f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683247 containerd[1570]: time="2024-11-12T20:55:03.682195980Z" level=error msg="Failed to destroy network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683247 containerd[1570]: time="2024-11-12T20:55:03.682627069Z" level=error msg="encountered an error cleaning up failed sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683247 containerd[1570]: time="2024-11-12T20:55:03.682665261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fmv8c,Uid:d3e73151-088a-437b-9a45-b13477085c0c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683482 kubelet[2741]: E1112 20:55:03.681848 2741 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683482 kubelet[2741]: E1112 20:55:03.681955 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7669974dd4-n7xsq" Nov 12 20:55:03.683482 kubelet[2741]: E1112 20:55:03.681986 2741 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7669974dd4-n7xsq" Nov 12 20:55:03.683482 kubelet[2741]: E1112 20:55:03.682441 2741 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683717 kubelet[2741]: E1112 20:55:03.682476 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7669974dd4-rq2l5" Nov 12 20:55:03.683717 kubelet[2741]: E1112 20:55:03.682499 2741 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7669974dd4-rq2l5" Nov 12 20:55:03.683717 kubelet[2741]: E1112 20:55:03.682549 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7669974dd4-rq2l5_calico-apiserver(d030104e-d3be-4689-840e-40e7cceed6f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7669974dd4-rq2l5_calico-apiserver(d030104e-d3be-4689-840e-40e7cceed6f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7669974dd4-rq2l5" podUID="d030104e-d3be-4689-840e-40e7cceed6f7" Nov 12 20:55:03.683862 containerd[1570]: time="2024-11-12T20:55:03.683519824Z" level=error msg="Failed to destroy network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683862 containerd[1570]: 
time="2024-11-12T20:55:03.683703497Z" level=error msg="Failed to destroy network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683975 kubelet[2741]: E1112 20:55:03.682961 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7669974dd4-n7xsq_calico-apiserver(c7f0cde4-77ad-4783-bce2-a9599e5f533e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7669974dd4-n7xsq_calico-apiserver(c7f0cde4-77ad-4783-bce2-a9599e5f533e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7669974dd4-n7xsq" podUID="c7f0cde4-77ad-4783-bce2-a9599e5f533e" Nov 12 20:55:03.683975 kubelet[2741]: E1112 20:55:03.683160 2741 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.683975 kubelet[2741]: E1112 20:55:03.683202 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fmv8c" Nov 12 20:55:03.684126 kubelet[2741]: E1112 20:55:03.683234 2741 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fmv8c" Nov 12 20:55:03.684126 kubelet[2741]: E1112 20:55:03.683273 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fmv8c_kube-system(d3e73151-088a-437b-9a45-b13477085c0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fmv8c_kube-system(d3e73151-088a-437b-9a45-b13477085c0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fmv8c" podUID="d3e73151-088a-437b-9a45-b13477085c0c" Nov 12 20:55:03.684239 containerd[1570]: time="2024-11-12T20:55:03.684005534Z" level=error msg="encountered an error cleaning up 
failed sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.684239 containerd[1570]: time="2024-11-12T20:55:03.684064455Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mxktf,Uid:18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.684239 containerd[1570]: time="2024-11-12T20:55:03.684171836Z" level=error msg="encountered an error cleaning up failed sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.684239 containerd[1570]: time="2024-11-12T20:55:03.684210138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f54cd56f9-7gnfq,Uid:2cd83f6e-605c-4278-926b-b78b4419f8ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.684407 kubelet[2741]: E1112 20:55:03.684359 2741 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.684407 kubelet[2741]: E1112 20:55:03.684390 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f54cd56f9-7gnfq" Nov 12 20:55:03.684407 kubelet[2741]: E1112 20:55:03.684406 2741 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f54cd56f9-7gnfq" Nov 12 20:55:03.684515 kubelet[2741]: E1112 20:55:03.684445 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-6f54cd56f9-7gnfq_calico-system(2cd83f6e-605c-4278-926b-b78b4419f8ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f54cd56f9-7gnfq_calico-system(2cd83f6e-605c-4278-926b-b78b4419f8ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f54cd56f9-7gnfq" podUID="2cd83f6e-605c-4278-926b-b78b4419f8ae" Nov 12 20:55:03.684515 kubelet[2741]: E1112 20:55:03.684488 2741 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.684515 kubelet[2741]: E1112 20:55:03.684510 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mxktf" Nov 12 20:55:03.684638 kubelet[2741]: E1112 20:55:03.684526 2741 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mxktf" Nov 12 20:55:03.684638 kubelet[2741]: E1112 20:55:03.684573 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mxktf_kube-system(18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mxktf_kube-system(18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mxktf" podUID="18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4" Nov 12 20:55:03.914248 kubelet[2741]: I1112 20:55:03.914213 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:03.915087 containerd[1570]: time="2024-11-12T20:55:03.915013365Z" level=info msg="StopPodSandbox for \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\"" Nov 12 20:55:03.915324 containerd[1570]: time="2024-11-12T20:55:03.915285846Z" level=info msg="Ensure that sandbox 9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace in task-service has been cleanup successfully" 
Nov 12 20:55:03.917749 kubelet[2741]: E1112 20:55:03.917711 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:03.918709 containerd[1570]: time="2024-11-12T20:55:03.918623226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:55:03.919695 kubelet[2741]: I1112 20:55:03.919330 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:03.920696 containerd[1570]: time="2024-11-12T20:55:03.920595365Z" level=info msg="StopPodSandbox for \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\"" Nov 12 20:55:03.920808 containerd[1570]: time="2024-11-12T20:55:03.920774541Z" level=info msg="Ensure that sandbox eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10 in task-service has been cleanup successfully" Nov 12 20:55:03.921551 kubelet[2741]: I1112 20:55:03.921522 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:03.923842 containerd[1570]: time="2024-11-12T20:55:03.923133536Z" level=info msg="StopPodSandbox for \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\"" Nov 12 20:55:03.923842 containerd[1570]: time="2024-11-12T20:55:03.923362736Z" level=info msg="Ensure that sandbox e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c in task-service has been cleanup successfully" Nov 12 20:55:03.925606 kubelet[2741]: I1112 20:55:03.925560 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:03.928110 containerd[1570]: time="2024-11-12T20:55:03.927715109Z" level=info msg="StopPodSandbox for \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\"" Nov 12 20:55:03.928110 containerd[1570]: time="2024-11-12T20:55:03.927935633Z" level=info msg="Ensure that sandbox 4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc in task-service has been cleanup successfully" Nov 12 20:55:03.928676 kubelet[2741]: I1112 20:55:03.928317 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:03.929222 containerd[1570]: time="2024-11-12T20:55:03.929055544Z" level=info msg="StopPodSandbox for \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\"" Nov 12 20:55:03.929316 containerd[1570]: time="2024-11-12T20:55:03.929288171Z" level=info msg="Ensure that sandbox faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980 in task-service has been cleanup successfully" Nov 12 20:55:03.970993 containerd[1570]: time="2024-11-12T20:55:03.970784344Z" level=error msg="StopPodSandbox for \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\" failed" error="failed to destroy network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.971526 kubelet[2741]: E1112 20:55:03.971382 2741 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:03.971526 kubelet[2741]: E1112 20:55:03.971482 2741 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace"} Nov 12 20:55:03.971526 kubelet[2741]: E1112 20:55:03.971530 2741 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:03.971716 kubelet[2741]: E1112 20:55:03.971572 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mxktf" podUID="18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4" Nov 12 20:55:03.990516 containerd[1570]: time="2024-11-12T20:55:03.989242676Z" level=error msg="StopPodSandbox for \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\" failed" error="failed to destroy network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.990516 containerd[1570]: time="2024-11-12T20:55:03.989445196Z" level=error msg="StopPodSandbox for \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\" failed" error="failed to destroy network for sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.990738 kubelet[2741]: E1112 20:55:03.989579 2741 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:03.990738 kubelet[2741]: E1112 20:55:03.989634 2741 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc"} Nov 12 20:55:03.990738 
kubelet[2741]: E1112 20:55:03.989676 2741 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d030104e-d3be-4689-840e-40e7cceed6f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:03.990738 kubelet[2741]: E1112 20:55:03.989721 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d030104e-d3be-4689-840e-40e7cceed6f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7669974dd4-rq2l5" podUID="d030104e-d3be-4689-840e-40e7cceed6f7" Nov 12 20:55:03.991061 containerd[1570]: time="2024-11-12T20:55:03.990614961Z" level=error msg="StopPodSandbox for \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\" failed" error="failed to destroy network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.991098 kubelet[2741]: E1112 20:55:03.989730 2741 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:03.991098 kubelet[2741]: E1112 20:55:03.989788 2741 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980"} Nov 12 20:55:03.991098 kubelet[2741]: E1112 20:55:03.989838 2741 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7f0cde4-77ad-4783-bce2-a9599e5f533e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:03.991098 kubelet[2741]: E1112 20:55:03.989879 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7f0cde4-77ad-4783-bce2-a9599e5f533e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7669974dd4-n7xsq" podUID="c7f0cde4-77ad-4783-bce2-a9599e5f533e" Nov 12 20:55:03.991244 kubelet[2741]: E1112 20:55:03.990865 2741 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:03.991244 kubelet[2741]: E1112 20:55:03.990892 2741 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c"} Nov 12 20:55:03.991244 kubelet[2741]: E1112 20:55:03.990950 2741 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2cd83f6e-605c-4278-926b-b78b4419f8ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:03.991244 kubelet[2741]: E1112 20:55:03.990981 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cd83f6e-605c-4278-926b-b78b4419f8ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f54cd56f9-7gnfq" podUID="2cd83f6e-605c-4278-926b-b78b4419f8ae" Nov 12 20:55:03.992638 containerd[1570]: time="2024-11-12T20:55:03.992584836Z" level=error msg="StopPodSandbox for \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\" failed" error="failed to destroy network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:03.992805 kubelet[2741]: E1112 20:55:03.992777 2741 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:03.992870 kubelet[2741]: E1112 20:55:03.992809 2741 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10"} Nov 12 20:55:03.992870 kubelet[2741]: E1112 20:55:03.992847 2741 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"d3e73151-088a-437b-9a45-b13477085c0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:03.993033 kubelet[2741]: E1112 20:55:03.992898 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3e73151-088a-437b-9a45-b13477085c0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fmv8c" podUID="d3e73151-088a-437b-9a45-b13477085c0c" Nov 12 20:55:04.233372 containerd[1570]: time="2024-11-12T20:55:04.233247440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll6wf,Uid:3131bc61-5520-4f07-bd62-766f60d48de0,Namespace:calico-system,Attempt:0,}" Nov 12 20:55:04.519232 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:50796.service - OpenSSH per-connection server daemon (10.0.0.1:50796). Nov 12 20:55:04.593998 sshd[3833]: Accepted publickey for core from 10.0.0.1 port 50796 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:04.596298 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:04.601142 systemd-logind[1552]: New session 9 of user core. Nov 12 20:55:04.609273 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:55:04.744060 sshd[3833]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:04.749209 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:55:04.750245 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:50796.service: Deactivated successfully. 
Nov 12 20:55:04.754823 containerd[1570]: time="2024-11-12T20:55:04.754759635Z" level=error msg="Failed to destroy network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:04.755323 containerd[1570]: time="2024-11-12T20:55:04.755266216Z" level=error msg="encountered an error cleaning up failed sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:04.755441 containerd[1570]: time="2024-11-12T20:55:04.755330206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll6wf,Uid:3131bc61-5520-4f07-bd62-766f60d48de0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:04.755674 kubelet[2741]: E1112 20:55:04.755647 2741 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:04.756082 kubelet[2741]: E1112 20:55:04.755744 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll6wf" Nov 12 20:55:04.756082 kubelet[2741]: E1112 20:55:04.755791 2741 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll6wf" Nov 12 20:55:04.757471 kubelet[2741]: E1112 20:55:04.756319 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ll6wf_calico-system(3131bc61-5520-4f07-bd62-766f60d48de0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ll6wf_calico-system(3131bc61-5520-4f07-bd62-766f60d48de0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0" Nov 12 20:55:04.758318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3-shm.mount: Deactivated successfully. Nov 12 20:55:04.759393 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:55:04.760335 systemd-logind[1552]: Removed session 9. Nov 12 20:55:04.931424 kubelet[2741]: I1112 20:55:04.931380 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:04.932120 containerd[1570]: time="2024-11-12T20:55:04.932071841Z" level=info msg="StopPodSandbox for \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\"" Nov 12 20:55:04.932312 containerd[1570]: time="2024-11-12T20:55:04.932275624Z" level=info msg="Ensure that sandbox 161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3 in task-service has been cleanup successfully" Nov 12 20:55:04.962620 containerd[1570]: time="2024-11-12T20:55:04.962549671Z" level=error msg="StopPodSandbox for \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\" failed" error="failed to destroy network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:04.962930 kubelet[2741]: E1112 20:55:04.962870 2741 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:04.963023 kubelet[2741]: E1112 20:55:04.962951 2741 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3"} Nov 12 20:55:04.963023 kubelet[2741]: E1112 20:55:04.962998 2741 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3131bc61-5520-4f07-bd62-766f60d48de0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:04.963142 kubelet[2741]: E1112 20:55:04.963053 2741 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3131bc61-5520-4f07-bd62-766f60d48de0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ll6wf" podUID="3131bc61-5520-4f07-bd62-766f60d48de0" Nov 12 20:55:09.758179 systemd[1]: Started 
sshd@9-10.0.0.137:22-10.0.0.1:50638.service - OpenSSH per-connection server daemon (10.0.0.1:50638). Nov 12 20:55:10.068536 sshd[3916]: Accepted publickey for core from 10.0.0.1 port 50638 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:10.071005 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:10.078685 systemd-logind[1552]: New session 10 of user core. Nov 12 20:55:10.084405 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:55:10.243744 sshd[3916]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:10.248502 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:50638.service: Deactivated successfully. Nov 12 20:55:10.251795 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:55:10.253451 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:55:10.254539 systemd-logind[1552]: Removed session 10. Nov 12 20:55:10.308778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404391388.mount: Deactivated successfully. Nov 12 20:55:11.537999 containerd[1570]: time="2024-11-12T20:55:11.537886942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:11.576359 containerd[1570]: time="2024-11-12T20:55:11.576285904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:55:11.621978 containerd[1570]: time="2024-11-12T20:55:11.621894031Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:11.661448 containerd[1570]: time="2024-11-12T20:55:11.661382486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:11.662167 containerd[1570]: time="2024-11-12T20:55:11.662125691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 7.743453203s" Nov 12 20:55:11.662235 containerd[1570]: time="2024-11-12T20:55:11.662165936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:55:11.679306 containerd[1570]: time="2024-11-12T20:55:11.679227897Z" level=info msg="CreateContainer within sandbox \"14b22e1448a7088e9796bad48b115a539e74545440272f1a7bcbc9f1a851c16d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:55:12.131777 containerd[1570]: time="2024-11-12T20:55:12.131674453Z" level=info msg="CreateContainer within sandbox \"14b22e1448a7088e9796bad48b115a539e74545440272f1a7bcbc9f1a851c16d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e98541b1ccdd00b912ecc87318d274f2f1093b7466049ccaffea2b17ab71df30\"" Nov 12 20:55:12.132420 containerd[1570]: time="2024-11-12T20:55:12.132394122Z" level=info msg="StartContainer for \"e98541b1ccdd00b912ecc87318d274f2f1093b7466049ccaffea2b17ab71df30\"" Nov 12 20:55:12.763728 containerd[1570]: 
time="2024-11-12T20:55:12.763584046Z" level=info msg="StartContainer for \"e98541b1ccdd00b912ecc87318d274f2f1093b7466049ccaffea2b17ab71df30\" returns successfully" Nov 12 20:55:12.770572 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:55:12.770734 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 12 20:55:13.026844 kubelet[2741]: E1112 20:55:13.026784 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:13.202857 kubelet[2741]: I1112 20:55:13.202785 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-95mnj" podStartSLOduration=2.143719461 podStartE2EDuration="27.202741791s" podCreationTimestamp="2024-11-12 20:54:46 +0000 UTC" firstStartedPulling="2024-11-12 20:54:46.603425846 +0000 UTC m=+23.493931501" lastFinishedPulling="2024-11-12 20:55:11.662448176 +0000 UTC m=+48.552953831" observedRunningTime="2024-11-12 20:55:13.20249113 +0000 UTC m=+50.092996805" watchObservedRunningTime="2024-11-12 20:55:13.202741791 +0000 UTC m=+50.093247446" Nov 12 20:55:14.028920 kubelet[2741]: E1112 20:55:14.028889 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:14.694997 kernel: bpftool[4167]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:55:14.959519 systemd-networkd[1256]: vxlan.calico: Link UP Nov 12 20:55:14.959534 systemd-networkd[1256]: vxlan.calico: Gained carrier Nov 12 20:55:15.230558 containerd[1570]: time="2024-11-12T20:55:15.230105190Z" level=info msg="StopPodSandbox for \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\"" Nov 12 20:55:15.253564 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:50646.service - OpenSSH per-connection server daemon (10.0.0.1:50646). Nov 12 20:55:15.296825 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 50646 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:15.298664 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:15.305994 systemd-logind[1552]: New session 11 of user core. Nov 12 20:55:15.315532 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:55:15.475192 sshd[4234]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:15.479287 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:50646.service: Deactivated successfully. Nov 12 20:55:15.484318 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:55:15.485673 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:55:15.487256 systemd-logind[1552]: Removed session 11. Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.447 [INFO][4228] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.448 [INFO][4228] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" iface="eth0" netns="/var/run/netns/cni-ef2d75c7-1f77-ea36-4c51-a52986d1ce99" Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.448 [INFO][4228] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" iface="eth0" netns="/var/run/netns/cni-ef2d75c7-1f77-ea36-4c51-a52986d1ce99" Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.449 [INFO][4228] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" iface="eth0" netns="/var/run/netns/cni-ef2d75c7-1f77-ea36-4c51-a52986d1ce99" Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.449 [INFO][4228] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.449 [INFO][4228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.511 [INFO][4274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" HandleID="k8s-pod-network.e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.512 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.512 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.531 [WARNING][4274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" HandleID="k8s-pod-network.e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.531 [INFO][4274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" HandleID="k8s-pod-network.e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.532 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:15.539095 containerd[1570]: 2024-11-12 20:55:15.536 [INFO][4228] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:15.539578 containerd[1570]: time="2024-11-12T20:55:15.539328802Z" level=info msg="TearDown network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\" successfully" Nov 12 20:55:15.539578 containerd[1570]: time="2024-11-12T20:55:15.539371402Z" level=info msg="StopPodSandbox for \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\" returns successfully" Nov 12 20:55:15.541033 containerd[1570]: time="2024-11-12T20:55:15.541004045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f54cd56f9-7gnfq,Uid:2cd83f6e-605c-4278-926b-b78b4419f8ae,Namespace:calico-system,Attempt:1,}" Nov 12 20:55:15.543472 systemd[1]: run-netns-cni\x2def2d75c7\x2d1f77\x2dea36\x2d4c51\x2da52986d1ce99.mount: Deactivated successfully. 
Nov 12 20:55:16.017608 systemd-networkd[1256]: calic30de1e56b3: Link UP Nov 12 20:55:16.018417 systemd-networkd[1256]: calic30de1e56b3: Gained carrier Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.912 [INFO][4286] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0 calico-kube-controllers-6f54cd56f9- calico-system 2cd83f6e-605c-4278-926b-b78b4419f8ae 898 0 2024-11-12 20:54:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f54cd56f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f54cd56f9-7gnfq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic30de1e56b3 [] []}} ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Namespace="calico-system" Pod="calico-kube-controllers-6f54cd56f9-7gnfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.913 [INFO][4286] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Namespace="calico-system" Pod="calico-kube-controllers-6f54cd56f9-7gnfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.952 [INFO][4298] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" HandleID="k8s-pod-network.14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.971 [INFO][4298] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" HandleID="k8s-pod-network.14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000362f30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f54cd56f9-7gnfq", "timestamp":"2024-11-12 20:55:15.952839074 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.971 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.971 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.971 [INFO][4298] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.974 [INFO][4298] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" host="localhost" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.982 [INFO][4298] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.989 [INFO][4298] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.993 [INFO][4298] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.995 [INFO][4298] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.995 [INFO][4298] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" host="localhost" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:15.999 [INFO][4298] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379 Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:16.004 [INFO][4298] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" host="localhost" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:16.011 [INFO][4298] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" host="localhost" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:16.011 [INFO][4298] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" host="localhost" Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:16.011 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:55:16.037180 containerd[1570]: 2024-11-12 20:55:16.011 [INFO][4298] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" HandleID="k8s-pod-network.14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:16.037969 containerd[1570]: 2024-11-12 20:55:16.015 [INFO][4286] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Namespace="calico-system" Pod="calico-kube-controllers-6f54cd56f9-7gnfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0", GenerateName:"calico-kube-controllers-6f54cd56f9-", Namespace:"calico-system", SelfLink:"", UID:"2cd83f6e-605c-4278-926b-b78b4419f8ae", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f54cd56f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f54cd56f9-7gnfq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic30de1e56b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:16.037969 containerd[1570]: 2024-11-12 20:55:16.015 [INFO][4286] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Namespace="calico-system" Pod="calico-kube-controllers-6f54cd56f9-7gnfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:16.037969 containerd[1570]: 2024-11-12 20:55:16.015 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic30de1e56b3 ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Namespace="calico-system" Pod="calico-kube-controllers-6f54cd56f9-7gnfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:16.037969 containerd[1570]: 2024-11-12 20:55:16.018 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Namespace="calico-system" Pod="calico-kube-controllers-6f54cd56f9-7gnfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:16.037969 containerd[1570]: 2024-11-12 20:55:16.018 [INFO][4286] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Namespace="calico-system" Pod="calico-kube-controllers-6f54cd56f9-7gnfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0", GenerateName:"calico-kube-controllers-6f54cd56f9-", Namespace:"calico-system", SelfLink:"", UID:"2cd83f6e-605c-4278-926b-b78b4419f8ae", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f54cd56f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379", Pod:"calico-kube-controllers-6f54cd56f9-7gnfq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic30de1e56b3", MAC:"62:99:5c:dd:ef:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:16.037969 containerd[1570]: 2024-11-12 20:55:16.031 [INFO][4286] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379" Namespace="calico-system" Pod="calico-kube-controllers-6f54cd56f9-7gnfq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:16.075640 containerd[1570]: time="2024-11-12T20:55:16.075386641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:16.075640 containerd[1570]: time="2024-11-12T20:55:16.075465288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:16.075640 containerd[1570]: time="2024-11-12T20:55:16.075478072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:16.075640 containerd[1570]: time="2024-11-12T20:55:16.075626671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:16.116355 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:55:16.153646 containerd[1570]: time="2024-11-12T20:55:16.153588382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f54cd56f9-7gnfq,Uid:2cd83f6e-605c-4278-926b-b78b4419f8ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379\"" Nov 12 20:55:16.155542 containerd[1570]: time="2024-11-12T20:55:16.155497564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:55:16.229752 containerd[1570]: time="2024-11-12T20:55:16.229655257Z" level=info msg="StopPodSandbox for \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\"" Nov 12 20:55:16.544037 systemd[1]: run-containerd-runc-k8s.io-14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379-runc.1vUL3y.mount: Deactivated successfully. Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.541 [INFO][4379] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.541 [INFO][4379] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" iface="eth0" netns="/var/run/netns/cni-1b15de4b-2a6f-c09b-ade4-a7c8229b15e1" Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.543 [INFO][4379] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" iface="eth0" netns="/var/run/netns/cni-1b15de4b-2a6f-c09b-ade4-a7c8229b15e1" Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.543 [INFO][4379] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" iface="eth0" netns="/var/run/netns/cni-1b15de4b-2a6f-c09b-ade4-a7c8229b15e1" Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.543 [INFO][4379] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.543 [INFO][4379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.570 [INFO][4388] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" HandleID="k8s-pod-network.4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.570 [INFO][4388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.570 [INFO][4388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.577 [WARNING][4388] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" HandleID="k8s-pod-network.4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.577 [INFO][4388] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" HandleID="k8s-pod-network.4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.581 [INFO][4388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:16.590085 containerd[1570]: 2024-11-12 20:55:16.586 [INFO][4379] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:16.591183 containerd[1570]: time="2024-11-12T20:55:16.591102897Z" level=info msg="TearDown network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\" successfully" Nov 12 20:55:16.591183 containerd[1570]: time="2024-11-12T20:55:16.591138734Z" level=info msg="StopPodSandbox for \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\" returns successfully" Nov 12 20:55:16.594417 containerd[1570]: time="2024-11-12T20:55:16.594347675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7669974dd4-rq2l5,Uid:d030104e-d3be-4689-840e-40e7cceed6f7,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:55:16.598754 systemd[1]: run-netns-cni\x2d1b15de4b\x2d2a6f\x2dc09b\x2dade4\x2da7c8229b15e1.mount: Deactivated successfully. Nov 12 20:55:16.691924 systemd-networkd[1256]: vxlan.calico: Gained IPv6LL Nov 12 20:55:17.230083 containerd[1570]: time="2024-11-12T20:55:17.230034580Z" level=info msg="StopPodSandbox for \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\"" Nov 12 20:55:17.330129 systemd-networkd[1256]: calic30de1e56b3: Gained IPv6LL Nov 12 20:55:18.229039 containerd[1570]: time="2024-11-12T20:55:18.228806035Z" level=info msg="StopPodSandbox for \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\"" Nov 12 20:55:18.229039 containerd[1570]: time="2024-11-12T20:55:18.228890443Z" level=info msg="StopPodSandbox for \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\"" Nov 12 20:55:18.274542 systemd-networkd[1256]: cali8e2a1019002: Link UP Nov 12 20:55:18.275562 systemd-networkd[1256]: cali8e2a1019002: Gained carrier Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.802 [INFO][4418] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0 calico-apiserver-7669974dd4- calico-apiserver d030104e-d3be-4689-840e-40e7cceed6f7 907 0 2024-11-12 20:54:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7669974dd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7669974dd4-rq2l5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e2a1019002 [] []}} ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-rq2l5" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.802 [INFO][4418] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-rq2l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.836 [INFO][4441] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" HandleID="k8s-pod-network.56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.844 [INFO][4441] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" HandleID="k8s-pod-network.56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000288050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7669974dd4-rq2l5", "timestamp":"2024-11-12 20:55:17.836827533 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.844 [INFO][4441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.844 [INFO][4441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.844 [INFO][4441] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.880 [INFO][4441] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" host="localhost" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.885 [INFO][4441] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.890 [INFO][4441] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.892 [INFO][4441] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.895 [INFO][4441] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.895 [INFO][4441] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" host="localhost" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:17.976 [INFO][4441] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:18.071 [INFO][4441] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" host="localhost" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:18.263 [INFO][4441] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" host="localhost" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:18.263 [INFO][4441] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" host="localhost" Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:18.263 [INFO][4441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
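The sequence just logged (confirm the host's affinity for 192.168.88.128/26, load the block, claim an address, write the block back) is Calico's block-based IPAM at work. Below is a self-contained sketch of the claiming step, assuming a simple ordinal map; Block and Claim are invented names for illustration, not Calico's implementation.

package main

import (
    "fmt"
    "net"
)

// Block tracks which ordinals of a CIDR have already been handed out.
type Block struct {
    CIDR *net.IPNet
    Used map[int]string // ordinal -> handle ID that owns the address
}

// Claim returns the first free address in the block, skipping ordinal 0
// (the .128 base address, never assigned in the log above), and records
// the handle that owns it.
func (b *Block) Claim(handle string) (net.IP, error) {
    ones, bits := b.CIDR.Mask.Size()
    size := 1 << (bits - ones) // 64 addresses in a /26
    base := b.CIDR.IP.To4()
    for ord := 1; ord < size; ord++ {
        if _, taken := b.Used[ord]; taken {
            continue
        }
        b.Used[ord] = handle
        ip := make(net.IP, 4)
        copy(ip, base)
        ip[3] += byte(ord) // safe here: a /26 never crosses the last octet
        return ip, nil
    }
    return nil, fmt.Errorf("block %s is full", b.CIDR)
}

func main() {
    _, cidr, _ := net.ParseCIDR("192.168.88.128/26")
    b := &Block{CIDR: cidr, Used: map[int]string{}}
    for i := 0; i < 3; i++ {
        ip, _ := b.Claim(fmt.Sprintf("k8s-pod-network.handle-%d", i))
        fmt.Println(ip)
    }
}

Running it prints 192.168.88.129, .130 and .131, the same ascending order the log shows as successive sandboxes claim from the node's affine block.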
Nov 12 20:55:18.810581 containerd[1570]: 2024-11-12 20:55:18.263 [INFO][4441] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" HandleID="k8s-pod-network.56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:18.811307 containerd[1570]: 2024-11-12 20:55:18.271 [INFO][4418] cni-plugin/k8s.go 386: Populated endpoint ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-rq2l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0", GenerateName:"calico-apiserver-7669974dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d030104e-d3be-4689-840e-40e7cceed6f7", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7669974dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7669974dd4-rq2l5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e2a1019002", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:18.811307 containerd[1570]: 2024-11-12 20:55:18.271 [INFO][4418] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-rq2l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:18.811307 containerd[1570]: 2024-11-12 20:55:18.271 [INFO][4418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e2a1019002 ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-rq2l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:18.811307 containerd[1570]: 2024-11-12 20:55:18.274 [INFO][4418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-rq2l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:18.811307 containerd[1570]: 2024-11-12 20:55:18.274 [INFO][4418] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-rq2l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0", GenerateName:"calico-apiserver-7669974dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d030104e-d3be-4689-840e-40e7cceed6f7", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7669974dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b", Pod:"calico-apiserver-7669974dd4-rq2l5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e2a1019002", MAC:"f6:db:dc:64:a5:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:18.811307 containerd[1570]: 2024-11-12 20:55:18.808 [INFO][4418] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-rq2l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:19.007749 containerd[1570]: time="2024-11-12T20:55:19.007386190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:19.007749 containerd[1570]: time="2024-11-12T20:55:19.007475919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:19.007749 containerd[1570]: time="2024-11-12T20:55:19.007491558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.007749 containerd[1570]: time="2024-11-12T20:55:19.007635799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:17.798 [INFO][4411] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:17.798 [INFO][4411] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" iface="eth0" netns="/var/run/netns/cni-375301b9-36a5-72d7-92ef-2032a2cbf9e5" Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:17.799 [INFO][4411] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" iface="eth0" netns="/var/run/netns/cni-375301b9-36a5-72d7-92ef-2032a2cbf9e5" Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:17.799 [INFO][4411] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" iface="eth0" netns="/var/run/netns/cni-375301b9-36a5-72d7-92ef-2032a2cbf9e5" Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:17.799 [INFO][4411] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:17.799 [INFO][4411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:17.882 [INFO][4432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" HandleID="k8s-pod-network.faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:17.882 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:18.263 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:18.808 [WARNING][4432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" HandleID="k8s-pod-network.faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:18.808 [INFO][4432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" HandleID="k8s-pod-network.faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:19.000 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:19.015942 containerd[1570]: 2024-11-12 20:55:19.006 [INFO][4411] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:19.017755 systemd[1]: run-netns-cni\x2d375301b9\x2d36a5\x2d72d7\x2d92ef\x2d2032a2cbf9e5.mount: Deactivated successfully. 
Nov 12 20:55:19.023223 containerd[1570]: time="2024-11-12T20:55:19.022001545Z" level=info msg="TearDown network for sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\" successfully" Nov 12 20:55:19.023223 containerd[1570]: time="2024-11-12T20:55:19.022049605Z" level=info msg="StopPodSandbox for \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\" returns successfully" Nov 12 20:55:19.026190 containerd[1570]: time="2024-11-12T20:55:19.025896583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7669974dd4-n7xsq,Uid:c7f0cde4-77ad-4783-bce2-a9599e5f533e,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:55:19.069844 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.015 [INFO][4483] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.016 [INFO][4483] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" iface="eth0" netns="/var/run/netns/cni-7e3797cc-852e-e8b0-28ef-75f7fc1a3dcf" Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.016 [INFO][4483] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" iface="eth0" netns="/var/run/netns/cni-7e3797cc-852e-e8b0-28ef-75f7fc1a3dcf" Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.020 [INFO][4483] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" iface="eth0" netns="/var/run/netns/cni-7e3797cc-852e-e8b0-28ef-75f7fc1a3dcf" Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.020 [INFO][4483] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.020 [INFO][4483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.055 [INFO][4532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" HandleID="k8s-pod-network.eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.055 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.055 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.064 [WARNING][4532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" HandleID="k8s-pod-network.eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.064 [INFO][4532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" HandleID="k8s-pod-network.eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.068 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:19.085614 containerd[1570]: 2024-11-12 20:55:19.080 [INFO][4483] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:19.086958 containerd[1570]: time="2024-11-12T20:55:19.086565232Z" level=info msg="TearDown network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\" successfully" Nov 12 20:55:19.086958 containerd[1570]: time="2024-11-12T20:55:19.086595820Z" level=info msg="StopPodSandbox for \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\" returns successfully" Nov 12 20:55:19.087072 kubelet[2741]: E1112 20:55:19.087042 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:19.088352 containerd[1570]: time="2024-11-12T20:55:19.087766676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fmv8c,Uid:d3e73151-088a-437b-9a45-b13477085c0c,Namespace:kube-system,Attempt:1,}" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.015 [INFO][4490] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.015 [INFO][4490] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" iface="eth0" netns="/var/run/netns/cni-3c516009-adb0-5f12-38b3-0213bf7613a1" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.016 [INFO][4490] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" iface="eth0" netns="/var/run/netns/cni-3c516009-adb0-5f12-38b3-0213bf7613a1" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.018 [INFO][4490] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" iface="eth0" netns="/var/run/netns/cni-3c516009-adb0-5f12-38b3-0213bf7613a1" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.018 [INFO][4490] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.018 [INFO][4490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.081 [INFO][4531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" HandleID="k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.082 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.082 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.088 [WARNING][4531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" HandleID="k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.088 [INFO][4531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" HandleID="k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.094 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:19.102745 containerd[1570]: 2024-11-12 20:55:19.097 [INFO][4490] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:19.103182 containerd[1570]: time="2024-11-12T20:55:19.102899732Z" level=info msg="TearDown network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\" successfully" Nov 12 20:55:19.103182 containerd[1570]: time="2024-11-12T20:55:19.102935038Z" level=info msg="StopPodSandbox for \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\" returns successfully" Nov 12 20:55:19.103225 kubelet[2741]: E1112 20:55:19.103167 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:19.106891 containerd[1570]: time="2024-11-12T20:55:19.104306231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mxktf,Uid:18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4,Namespace:kube-system,Attempt:1,}" Nov 12 20:55:19.116274 containerd[1570]: time="2024-11-12T20:55:19.116227149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7669974dd4-rq2l5,Uid:d030104e-d3be-4689-840e-40e7cceed6f7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b\"" Nov 12 20:55:19.239539 containerd[1570]: time="2024-11-12T20:55:19.239497063Z" level=info msg="StopPodSandbox for \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\"" Nov 12 20:55:19.285578 systemd-networkd[1256]: califea476c4c67: Link UP Nov 12 20:55:19.287132 systemd-networkd[1256]: califea476c4c67: Gained carrier Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.138 [INFO][4560] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0 calico-apiserver-7669974dd4- calico-apiserver c7f0cde4-77ad-4783-bce2-a9599e5f533e 915 0 2024-11-12 20:54:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7669974dd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7669974dd4-n7xsq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califea476c4c67 [] []}} ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-n7xsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.139 [INFO][4560] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-n7xsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.205 [INFO][4594] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" HandleID="k8s-pod-network.31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.217 [INFO][4594] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" HandleID="k8s-pod-network.31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ac140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7669974dd4-n7xsq", "timestamp":"2024-11-12 20:55:19.205120589 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.217 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.217 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.217 [INFO][4594] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.220 [INFO][4594] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" host="localhost" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.225 [INFO][4594] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.234 [INFO][4594] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.237 [INFO][4594] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.247 [INFO][4594] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.248 [INFO][4594] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" host="localhost" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.252 [INFO][4594] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5 Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.267 [INFO][4594] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" host="localhost" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.275 [INFO][4594] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" host="localhost" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.275 [INFO][4594] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" host="localhost" Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.275 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:55:19.333139 containerd[1570]: 2024-11-12 20:55:19.275 [INFO][4594] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" HandleID="k8s-pod-network.31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.333735 containerd[1570]: 2024-11-12 20:55:19.281 [INFO][4560] cni-plugin/k8s.go 386: Populated endpoint ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-n7xsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0", GenerateName:"calico-apiserver-7669974dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7f0cde4-77ad-4783-bce2-a9599e5f533e", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7669974dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7669974dd4-n7xsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califea476c4c67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:19.333735 containerd[1570]: 2024-11-12 20:55:19.281 [INFO][4560] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-n7xsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.333735 containerd[1570]: 2024-11-12 20:55:19.281 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califea476c4c67 ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-n7xsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.333735 containerd[1570]: 2024-11-12 20:55:19.288 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-n7xsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.333735 containerd[1570]: 2024-11-12 20:55:19.289 [INFO][4560] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-n7xsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0", GenerateName:"calico-apiserver-7669974dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7f0cde4-77ad-4783-bce2-a9599e5f533e", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7669974dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5", Pod:"calico-apiserver-7669974dd4-n7xsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califea476c4c67", MAC:"da:22:92:39:c2:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:19.333735 containerd[1570]: 2024-11-12 20:55:19.329 [INFO][4560] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5" Namespace="calico-apiserver" Pod="calico-apiserver-7669974dd4-n7xsq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:19.366060 systemd-networkd[1256]: calie143d59199b: Link UP Nov 12 20:55:19.367672 systemd-networkd[1256]: calie143d59199b: Gained carrier Nov 12 20:55:19.377337 containerd[1570]: time="2024-11-12T20:55:19.375632926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:19.377337 containerd[1570]: time="2024-11-12T20:55:19.375731591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:19.377337 containerd[1570]: time="2024-11-12T20:55:19.375752911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.377337 containerd[1570]: time="2024-11-12T20:55:19.375873448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.168 [INFO][4580] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--fmv8c-eth0 coredns-76f75df574- kube-system d3e73151-088a-437b-9a45-b13477085c0c 926 0 2024-11-12 20:54:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-fmv8c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie143d59199b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Namespace="kube-system" Pod="coredns-76f75df574-fmv8c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fmv8c-" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.168 [INFO][4580] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Namespace="kube-system" Pod="coredns-76f75df574-fmv8c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.233 [INFO][4614] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" HandleID="k8s-pod-network.c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.271 [INFO][4614] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" HandleID="k8s-pod-network.c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000365ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-fmv8c", "timestamp":"2024-11-12 20:55:19.228263865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.271 [INFO][4614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.275 [INFO][4614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
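Unlike the apiserver endpoints, the coredns WorkloadEndpoint found above carries named ports ({dns UDP 53 0 }, {dns-tcp TCP 53 0 }, {metrics TCP 9153 0 }); the endpoint dumps that follow render the same ports in Go hex, where 0x35 is 53 and 0x23c1 is 9153. A small sketch of that structure, using EndpointPort as a local stand-in rather than the v3.WorkloadEndpointPort type:

package main

import "fmt"

// EndpointPort is a simplified stand-in for the named ports carried
// on the coredns workload endpoint in the log above.
type EndpointPort struct {
    Name     string
    Protocol string
    Port     uint16
}

func main() {
    ports := []EndpointPort{
        {Name: "dns", Protocol: "UDP", Port: 0x35},       // 53
        {Name: "dns-tcp", Protocol: "TCP", Port: 0x35},   // 53
        {Name: "metrics", Protocol: "TCP", Port: 0x23c1}, // 9153
    }
    for _, p := range ports {
        fmt.Printf("%-8s %s/%d\n", p.Name, p.Protocol, p.Port)
    }
}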
Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.275 [INFO][4614] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.278 [INFO][4614] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" host="localhost" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.284 [INFO][4614] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.327 [INFO][4614] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.331 [INFO][4614] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.335 [INFO][4614] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.335 [INFO][4614] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" host="localhost" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.338 [INFO][4614] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365 Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.346 [INFO][4614] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" host="localhost" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.352 [INFO][4614] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" host="localhost" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.352 [INFO][4614] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" host="localhost" Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.352 [INFO][4614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
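A quick check of the arithmetic behind these assignments: the affine block 192.168.88.128/26 spans 64 addresses (.128 through .191), and the sandboxes in this section draw .129 through .132 from it in strict sequence, with .133 following for coredns-76f75df574-mxktf just below.

package main

import (
    "fmt"
    "net"
)

func main() {
    _, block, _ := net.ParseCIDR("192.168.88.128/26")
    ones, bits := block.Mask.Size()
    fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64
    for _, ip := range []string{"192.168.88.129", "192.168.88.133"} {
        fmt.Printf("%s in block: %v\n", ip, block.Contains(net.ParseIP(ip)))
    }
}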
Nov 12 20:55:19.385062 containerd[1570]: 2024-11-12 20:55:19.352 [INFO][4614] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" HandleID="k8s-pod-network.c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.385904 containerd[1570]: 2024-11-12 20:55:19.358 [INFO][4580] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Namespace="kube-system" Pod="coredns-76f75df574-fmv8c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fmv8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--fmv8c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d3e73151-088a-437b-9a45-b13477085c0c", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-fmv8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie143d59199b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:19.385904 containerd[1570]: 2024-11-12 20:55:19.358 [INFO][4580] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Namespace="kube-system" Pod="coredns-76f75df574-fmv8c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.385904 containerd[1570]: 2024-11-12 20:55:19.358 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie143d59199b ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Namespace="kube-system" Pod="coredns-76f75df574-fmv8c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.385904 containerd[1570]: 2024-11-12 20:55:19.367 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Namespace="kube-system" Pod="coredns-76f75df574-fmv8c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.385904 containerd[1570]: 2024-11-12 20:55:19.368 
[INFO][4580] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Namespace="kube-system" Pod="coredns-76f75df574-fmv8c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fmv8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--fmv8c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d3e73151-088a-437b-9a45-b13477085c0c", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365", Pod:"coredns-76f75df574-fmv8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie143d59199b", MAC:"8a:68:05:5b:16:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:19.385904 containerd[1570]: 2024-11-12 20:55:19.380 [INFO][4580] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365" Namespace="kube-system" Pod="coredns-76f75df574-fmv8c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:19.404201 systemd-networkd[1256]: cali62d9a21e570: Link UP Nov 12 20:55:19.404979 systemd-networkd[1256]: cali62d9a21e570: Gained carrier Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.208 [INFO][4600] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--mxktf-eth0 coredns-76f75df574- kube-system 18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4 925 0 2024-11-12 20:54:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-mxktf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali62d9a21e570 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Namespace="kube-system" Pod="coredns-76f75df574-mxktf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mxktf-" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.209 
[INFO][4600] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Namespace="kube-system" Pod="coredns-76f75df574-mxktf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.275 [INFO][4623] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" HandleID="k8s-pod-network.d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.294 [INFO][4623] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" HandleID="k8s-pod-network.d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c4c30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-mxktf", "timestamp":"2024-11-12 20:55:19.274933816 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.294 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.352 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.352 [INFO][4623] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.354 [INFO][4623] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" host="localhost" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.359 [INFO][4623] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.367 [INFO][4623] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.371 [INFO][4623] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.375 [INFO][4623] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.375 [INFO][4623] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" host="localhost" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.378 [INFO][4623] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.383 [INFO][4623] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" host="localhost" Nov 12 20:55:19.424965 containerd[1570]: 
2024-11-12 20:55:19.391 [INFO][4623] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" host="localhost" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.391 [INFO][4623] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" host="localhost" Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.391 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:19.424965 containerd[1570]: 2024-11-12 20:55:19.391 [INFO][4623] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" HandleID="k8s-pod-network.d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.425692 containerd[1570]: 2024-11-12 20:55:19.397 [INFO][4600] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Namespace="kube-system" Pod="coredns-76f75df574-mxktf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mxktf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mxktf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-mxktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62d9a21e570", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:19.425692 containerd[1570]: 2024-11-12 20:55:19.397 [INFO][4600] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Namespace="kube-system" Pod="coredns-76f75df574-mxktf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.425692 containerd[1570]: 2024-11-12 20:55:19.397 [INFO][4600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name 
to cali62d9a21e570 ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Namespace="kube-system" Pod="coredns-76f75df574-mxktf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.425692 containerd[1570]: 2024-11-12 20:55:19.404 [INFO][4600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Namespace="kube-system" Pod="coredns-76f75df574-mxktf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.425692 containerd[1570]: 2024-11-12 20:55:19.405 [INFO][4600] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Namespace="kube-system" Pod="coredns-76f75df574-mxktf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mxktf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mxktf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba", Pod:"coredns-76f75df574-mxktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62d9a21e570", MAC:"c2:f2:17:9e:14:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:19.425692 containerd[1570]: 2024-11-12 20:55:19.417 [INFO][4600] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba" Namespace="kube-system" Pod="coredns-76f75df574-mxktf" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:19.427014 containerd[1570]: time="2024-11-12T20:55:19.426405941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:19.427014 containerd[1570]: time="2024-11-12T20:55:19.426503875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:19.427014 containerd[1570]: time="2024-11-12T20:55:19.426523382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.427014 containerd[1570]: time="2024-11-12T20:55:19.426620635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.431580 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.345 [INFO][4647] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.345 [INFO][4647] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" iface="eth0" netns="/var/run/netns/cni-0547ad24-266d-e8b0-8b8d-1c94dccc5d99" Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.345 [INFO][4647] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" iface="eth0" netns="/var/run/netns/cni-0547ad24-266d-e8b0-8b8d-1c94dccc5d99" Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.346 [INFO][4647] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" iface="eth0" netns="/var/run/netns/cni-0547ad24-266d-e8b0-8b8d-1c94dccc5d99" Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.346 [INFO][4647] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.346 [INFO][4647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.394 [INFO][4672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" HandleID="k8s-pod-network.161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.394 [INFO][4672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.394 [INFO][4672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.406 [WARNING][4672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" HandleID="k8s-pod-network.161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.418 [INFO][4672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" HandleID="k8s-pod-network.161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.422 [INFO][4672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:19.432826 containerd[1570]: 2024-11-12 20:55:19.426 [INFO][4647] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:19.434108 containerd[1570]: time="2024-11-12T20:55:19.433299849Z" level=info msg="TearDown network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\" successfully" Nov 12 20:55:19.434215 containerd[1570]: time="2024-11-12T20:55:19.434195636Z" level=info msg="StopPodSandbox for \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\" returns successfully" Nov 12 20:55:19.435730 containerd[1570]: time="2024-11-12T20:55:19.435694758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll6wf,Uid:3131bc61-5520-4f07-bd62-766f60d48de0,Namespace:calico-system,Attempt:1,}" Nov 12 20:55:19.466985 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:55:19.472325 containerd[1570]: time="2024-11-12T20:55:19.472223047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:19.472470 containerd[1570]: time="2024-11-12T20:55:19.472434174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:19.472637 containerd[1570]: time="2024-11-12T20:55:19.472603332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.473977 containerd[1570]: time="2024-11-12T20:55:19.473275347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.477193 containerd[1570]: time="2024-11-12T20:55:19.477078397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7669974dd4-n7xsq,Uid:c7f0cde4-77ad-4783-bce2-a9599e5f533e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5\"" Nov 12 20:55:19.503395 containerd[1570]: time="2024-11-12T20:55:19.503332573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fmv8c,Uid:d3e73151-088a-437b-9a45-b13477085c0c,Namespace:kube-system,Attempt:1,} returns sandbox id \"c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365\"" Nov 12 20:55:19.504366 kubelet[2741]: E1112 20:55:19.504181 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:19.507757 containerd[1570]: time="2024-11-12T20:55:19.507721947Z" level=info msg="CreateContainer within sandbox \"c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:55:19.516947 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:55:19.554638 containerd[1570]: time="2024-11-12T20:55:19.554565405Z" level=info msg="CreateContainer within sandbox \"c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0193ea9952db84701d46b049b82dadc028fda7921f45f3a25fc63dfdbbd622e\"" Nov 12 20:55:19.556084 containerd[1570]: time="2024-11-12T20:55:19.556053807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mxktf,Uid:18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4,Namespace:kube-system,Attempt:1,} returns sandbox id \"d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba\"" Nov 12 20:55:19.556330 containerd[1570]: time="2024-11-12T20:55:19.556306924Z" level=info msg="StartContainer for \"c0193ea9952db84701d46b049b82dadc028fda7921f45f3a25fc63dfdbbd622e\"" Nov 12 20:55:19.556974 kubelet[2741]: E1112 20:55:19.556939 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:19.560571 containerd[1570]: time="2024-11-12T20:55:19.560437721Z" level=info msg="CreateContainer within sandbox \"d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:55:19.592291 containerd[1570]: time="2024-11-12T20:55:19.592079820Z" level=info msg="CreateContainer within sandbox \"d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0196bcc4c82d512fe20f28cea1cfe73bac6d83188f75b8e684b7c300c20d86d2\"" Nov 12 20:55:19.594046 containerd[1570]: time="2024-11-12T20:55:19.593224655Z" level=info msg="StartContainer for \"0196bcc4c82d512fe20f28cea1cfe73bac6d83188f75b8e684b7c300c20d86d2\"" Nov 12 20:55:19.624821 systemd-networkd[1256]: cali2ae57f85641: Link UP Nov 12 20:55:19.625238 systemd-networkd[1256]: cali2ae57f85641: Gained carrier Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.519 [INFO][4791] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-csi--node--driver--ll6wf-eth0 csi-node-driver- calico-system 3131bc61-5520-4f07-bd62-766f60d48de0 938 0 2024-11-12 20:54:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ll6wf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2ae57f85641 [] []}} ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Namespace="calico-system" Pod="csi-node-driver-ll6wf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll6wf-" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.520 [INFO][4791] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Namespace="calico-system" Pod="csi-node-driver-ll6wf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.566 [INFO][4839] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" HandleID="k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.576 [INFO][4839] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" HandleID="k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00017e050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ll6wf", "timestamp":"2024-11-12 20:55:19.566436684 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.576 [INFO][4839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.576 [INFO][4839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
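
The ipam.AutoAssignArgs dump above shows the shape of the IPAM request the CNI plugin issues for each new pod: one IPv4 address, no IPv6, a handle ID derived from the sandbox ID, and namespace/node/pod recorded as attributes. A minimal sketch follows, using a local struct that mirrors the logged fields rather than the real libcalico-go types:

    // Illustrative only: a local struct mirroring the fields visible in the
    // AutoAssignArgs dump above; this is not the actual libcalico-go type.
    package main

    import "fmt"

    type autoAssignArgs struct {
        Num4, Num6 int               // how many IPv4/IPv6 addresses to assign
        HandleID   string            // ties the allocation to the sandbox ID
        Attrs      map[string]string // namespace/node/pod, as in the log
        Hostname   string
    }

    func main() {
        req := autoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: "k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8",
            Attrs: map[string]string{
                "namespace": "calico-system",
                "node":      "localhost",
                "pod":       "csi-node-driver-ll6wf",
            },
            Hostname: "localhost",
        }
        fmt.Printf("requesting %d IPv4 / %d IPv6 address(es) for %s\n",
            req.Num4, req.Num6, req.Attrs["pod"])
    }
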
Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.576 [INFO][4839] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.578 [INFO][4839] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" host="localhost" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.583 [INFO][4839] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.588 [INFO][4839] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.590 [INFO][4839] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.592 [INFO][4839] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.592 [INFO][4839] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" host="localhost" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.596 [INFO][4839] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8 Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.602 [INFO][4839] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" host="localhost" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.611 [INFO][4839] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" host="localhost" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.611 [INFO][4839] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" host="localhost" Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.611 [INFO][4839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
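
Every endpoint in this section lands in 192.168.88.128/26 because the host holds an affinity for that block: the plugin loads the block, picks the next free ordinal, and claims it by writing the block back under the host-wide lock. A simplified in-memory sketch of that selection, seeded so it reproduces the .134 assignment just above (real Calico claims IPs with a datastore write, not a map):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks the block in address order and returns the first
    // address not yet marked as used.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[netip.Addr]bool{}
        // Seed the network address plus .129 through .133, assumed assigned
        // earlier in the boot (.130, .132, .133 are visible in this section).
        a := netip.MustParseAddr("192.168.88.128")
        for i := 0; i < 6; i++ {
            used[a] = true
            a = a.Next()
        }
        if next, ok := nextFree(block, used); ok {
            fmt.Println("next assignment:", next) // 192.168.88.134, matching the claim above
        }
    }
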
Nov 12 20:55:19.645425 containerd[1570]: 2024-11-12 20:55:19.611 [INFO][4839] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" HandleID="k8s-pod-network.1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.646955 containerd[1570]: 2024-11-12 20:55:19.619 [INFO][4791] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Namespace="calico-system" Pod="csi-node-driver-ll6wf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll6wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ll6wf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3131bc61-5520-4f07-bd62-766f60d48de0", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ll6wf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ae57f85641", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:19.646955 containerd[1570]: 2024-11-12 20:55:19.620 [INFO][4791] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Namespace="calico-system" Pod="csi-node-driver-ll6wf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.646955 containerd[1570]: 2024-11-12 20:55:19.621 [INFO][4791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ae57f85641 ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Namespace="calico-system" Pod="csi-node-driver-ll6wf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.646955 containerd[1570]: 2024-11-12 20:55:19.626 [INFO][4791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Namespace="calico-system" Pod="csi-node-driver-ll6wf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.646955 containerd[1570]: 2024-11-12 20:55:19.627 [INFO][4791] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Namespace="calico-system" Pod="csi-node-driver-ll6wf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ll6wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ll6wf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3131bc61-5520-4f07-bd62-766f60d48de0", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8", Pod:"csi-node-driver-ll6wf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ae57f85641", MAC:"aa:95:1d:3a:10:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:19.646955 containerd[1570]: 2024-11-12 20:55:19.639 [INFO][4791] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8" Namespace="calico-system" Pod="csi-node-driver-ll6wf" WorkloadEndpoint="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:19.649551 containerd[1570]: time="2024-11-12T20:55:19.649424798Z" level=info msg="StartContainer for \"c0193ea9952db84701d46b049b82dadc028fda7921f45f3a25fc63dfdbbd622e\" returns successfully" Nov 12 20:55:19.670514 containerd[1570]: time="2024-11-12T20:55:19.670467232Z" level=info msg="StartContainer for \"0196bcc4c82d512fe20f28cea1cfe73bac6d83188f75b8e684b7c300c20d86d2\" returns successfully" Nov 12 20:55:19.681747 containerd[1570]: time="2024-11-12T20:55:19.681554836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:19.681747 containerd[1570]: time="2024-11-12T20:55:19.681605702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:19.681747 containerd[1570]: time="2024-11-12T20:55:19.681619889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.681747 containerd[1570]: time="2024-11-12T20:55:19.681719366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:19.709590 systemd-resolved[1457]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 12 20:55:19.731458 containerd[1570]: time="2024-11-12T20:55:19.731364819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll6wf,Uid:3131bc61-5520-4f07-bd62-766f60d48de0,Namespace:calico-system,Attempt:1,} returns sandbox id \"1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8\"" Nov 12 20:55:20.027769 systemd[1]: run-netns-cni\x2d0547ad24\x2d266d\x2de8b0\x2d8b8d\x2d1c94dccc5d99.mount: Deactivated successfully. Nov 12 20:55:20.028034 systemd[1]: run-netns-cni\x2d7e3797cc\x2d852e\x2de8b0\x2d28ef\x2d75f7fc1a3dcf.mount: Deactivated successfully. Nov 12 20:55:20.028230 systemd[1]: run-netns-cni\x2d3c516009\x2dadb0\x2d5f12\x2d38b3\x2d0213bf7613a1.mount: Deactivated successfully. Nov 12 20:55:20.082159 systemd-networkd[1256]: cali8e2a1019002: Gained IPv6LL Nov 12 20:55:20.212569 kubelet[2741]: E1112 20:55:20.212525 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:20.220136 kubelet[2741]: E1112 20:55:20.219968 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:20.244290 kubelet[2741]: I1112 20:55:20.244052 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mxktf" podStartSLOduration=45.24401507 podStartE2EDuration="45.24401507s" podCreationTimestamp="2024-11-12 20:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:20.226312157 +0000 UTC m=+57.116817812" watchObservedRunningTime="2024-11-12 20:55:20.24401507 +0000 UTC m=+57.134520725" Nov 12 20:55:20.338106 systemd-networkd[1256]: califea476c4c67: Gained IPv6LL Nov 12 20:55:20.492108 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:40594.service - OpenSSH per-connection server daemon (10.0.0.1:40594). Nov 12 20:55:20.530103 systemd-networkd[1256]: calie143d59199b: Gained IPv6LL Nov 12 20:55:20.638051 sshd[4998]: Accepted publickey for core from 10.0.0.1 port 40594 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:20.640276 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:20.646396 systemd-logind[1552]: New session 12 of user core. Nov 12 20:55:20.654417 systemd[1]: Started session-12.scope - Session 12 of User core. 
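
The kubelet "Nameserver limits exceeded" errors repeated throughout this section are benign: resolv.conf consumers such as glibc honor at most three nameservers (MAXNS), so kubelet applies the first three and reports the rest as omitted. A hedged illustration; the fourth server below is hypothetical, since the log does not show which entry was dropped:

    package main

    import "fmt"

    const maxNameservers = 3 // mirrors glibc's MAXNS

    // applied keeps only the nameservers that will actually take effect.
    func applied(servers []string) []string {
        if len(servers) > maxNameservers {
            return servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // 4th is hypothetical
        fmt.Println("applied nameserver line:", applied(host))
    }
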
Nov 12 20:55:20.660533 containerd[1570]: time="2024-11-12T20:55:20.660474181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:20.661547 containerd[1570]: time="2024-11-12T20:55:20.661505154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:55:20.662758 containerd[1570]: time="2024-11-12T20:55:20.662722426Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:20.665290 containerd[1570]: time="2024-11-12T20:55:20.665247988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:20.665889 containerd[1570]: time="2024-11-12T20:55:20.665858468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 4.510328001s" Nov 12 20:55:20.665948 containerd[1570]: time="2024-11-12T20:55:20.665894066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:55:20.666503 containerd[1570]: time="2024-11-12T20:55:20.666447656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:55:20.675238 containerd[1570]: time="2024-11-12T20:55:20.675045212Z" level=info msg="CreateContainer within sandbox \"14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:55:20.690571 containerd[1570]: time="2024-11-12T20:55:20.690524318Z" level=info msg="CreateContainer within sandbox \"14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"13557e3030fa4092f816abc3e04d7d58440fab4effbdb3573daf4891d24ffef4\"" Nov 12 20:55:20.691196 containerd[1570]: time="2024-11-12T20:55:20.691142293Z" level=info msg="StartContainer for \"13557e3030fa4092f816abc3e04d7d58440fab4effbdb3573daf4891d24ffef4\"" Nov 12 20:55:20.794674 sshd[4998]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:20.802145 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:40602.service - OpenSSH per-connection server daemon (10.0.0.1:40602). Nov 12 20:55:20.802627 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:40594.service: Deactivated successfully. Nov 12 20:55:20.807011 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:55:20.807958 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:55:20.809099 systemd-logind[1552]: Removed session 12. 
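
The pull summary above carries three distinct identifiers: the repo tag, the image id (in containerd's CRI this is the digest of the image config blob), and the repo digest used for pinned, by-digest pulls. Combining the reported wall-clock time with the "bytes read" counter gives a rough effective transfer rate; a small sketch using the kube-controllers figures:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 34152461 // "bytes read" reported when the pull stopped
        d, err := time.ParseDuration("4.510328001s") // duration quoted in the log
        if err != nil {
            panic(err)
        }
        rate := float64(bytesRead) / d.Seconds() / 1e6
        fmt.Printf("effective pull rate: %.1f MB/s\n", rate) // roughly 7.6 MB/s
    }
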
Nov 12 20:55:20.850191 systemd-networkd[1256]: cali2ae57f85641: Gained IPv6LL Nov 12 20:55:20.904905 containerd[1570]: time="2024-11-12T20:55:20.904635025Z" level=info msg="StartContainer for \"13557e3030fa4092f816abc3e04d7d58440fab4effbdb3573daf4891d24ffef4\" returns successfully" Nov 12 20:55:20.914581 systemd-networkd[1256]: cali62d9a21e570: Gained IPv6LL Nov 12 20:55:20.936200 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 40602 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:20.939942 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:20.948492 systemd-logind[1552]: New session 13 of user core. Nov 12 20:55:20.956324 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:55:21.128694 sshd[5046]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:21.136323 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:40610.service - OpenSSH per-connection server daemon (10.0.0.1:40610). Nov 12 20:55:21.136881 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:40602.service: Deactivated successfully. Nov 12 20:55:21.146030 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:55:21.150414 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:55:21.154506 systemd-logind[1552]: Removed session 13. Nov 12 20:55:21.185756 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 40610 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:21.188055 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:21.194101 systemd-logind[1552]: New session 14 of user core. Nov 12 20:55:21.206460 systemd[1]: Started session-14.scope - Session 14 of User core. 
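
The "Gained IPv6LL" entries from systemd-networkd mark each Calico veth acquiring its autoconfigured fe80::/10 link-local address, a common readiness milestone for the host side of the pair. A generic sketch, not tied to networkd, that enumerates such addresses:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            addrs, err := ifc.Addrs()
            if err != nil {
                continue
            }
            for _, a := range addrs {
                ipn, ok := a.(*net.IPNet)
                // To4() == nil filters out 169.254.0.0/16, which is also
                // link-local unicast but IPv4.
                if ok && ipn.IP.IsLinkLocalUnicast() && ipn.IP.To4() == nil {
                    fmt.Printf("%s: %s\n", ifc.Name, ipn.IP) // e.g. a cali* veth with fe80::...
                }
            }
        }
    }
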
Nov 12 20:55:21.224573 kubelet[2741]: E1112 20:55:21.224201 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:21.225161 kubelet[2741]: E1112 20:55:21.224609 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:21.235710 kubelet[2741]: I1112 20:55:21.235650 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f54cd56f9-7gnfq" podStartSLOduration=30.724605466 podStartE2EDuration="35.235598586s" podCreationTimestamp="2024-11-12 20:54:46 +0000 UTC" firstStartedPulling="2024-11-12 20:55:16.155230202 +0000 UTC m=+53.045735857" lastFinishedPulling="2024-11-12 20:55:20.666223322 +0000 UTC m=+57.556728977" observedRunningTime="2024-11-12 20:55:21.235132857 +0000 UTC m=+58.125638512" watchObservedRunningTime="2024-11-12 20:55:21.235598586 +0000 UTC m=+58.126104241" Nov 12 20:55:21.236055 kubelet[2741]: I1112 20:55:21.236025 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fmv8c" podStartSLOduration=46.236000853 podStartE2EDuration="46.236000853s" podCreationTimestamp="2024-11-12 20:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:20.261376254 +0000 UTC m=+57.151881909" watchObservedRunningTime="2024-11-12 20:55:21.236000853 +0000 UTC m=+58.126506508" Nov 12 20:55:21.351210 sshd[5059]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:21.356794 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:40610.service: Deactivated successfully. Nov 12 20:55:21.360266 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 20:55:21.361194 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:55:21.362252 systemd-logind[1552]: Removed session 14. Nov 12 20:55:22.231816 kubelet[2741]: E1112 20:55:22.231692 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 12 20:55:22.258404 systemd[1]: run-containerd-runc-k8s.io-13557e3030fa4092f816abc3e04d7d58440fab4effbdb3573daf4891d24ffef4-runc.LXKUR7.mount: Deactivated successfully. Nov 12 20:55:23.221690 containerd[1570]: time="2024-11-12T20:55:23.221617789Z" level=info msg="StopPodSandbox for \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\"" Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.266 [WARNING][5115] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mxktf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba", Pod:"coredns-76f75df574-mxktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62d9a21e570", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.266 [INFO][5115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.266 [INFO][5115] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" iface="eth0" netns="" Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.266 [INFO][5115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.266 [INFO][5115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.295 [INFO][5125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" HandleID="k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.295 [INFO][5125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.295 [INFO][5125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
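
The entries just below show the release path being deliberately idempotent: the allocation for the old sandbox is looked up by handle ID, the miss is logged as a WARNING and ignored, and the plugin falls back to releasing by workload ID. A sketch of that flow with hypothetical function names standing in for the datastore calls:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("allocation not found")

    // Stubs standing in for datastore lookups; names are hypothetical.
    func releaseByHandle(handleID string) error     { return errNotFound }
    func releaseByWorkload(workloadID string) error { return nil }

    func release(handleID, workloadID string) error {
        err := releaseByHandle(handleID)
        if errors.Is(err, errNotFound) {
            // Mirrors the WARNING in the log: an absent allocation is ignored
            // and the plugin falls back to releasing by workload ID.
            fmt.Println("asked to release address but it doesn't exist; ignoring")
            return releaseByWorkload(workloadID)
        }
        return err
    }

    func main() {
        _ = release(
            "k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace",
            "localhost-k8s-coredns--76f75df574--mxktf-eth0",
        )
    }
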
Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.300 [WARNING][5125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" HandleID="k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.300 [INFO][5125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" HandleID="k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.302 [INFO][5125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:23.308946 containerd[1570]: 2024-11-12 20:55:23.305 [INFO][5115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:23.309519 containerd[1570]: time="2024-11-12T20:55:23.308955450Z" level=info msg="TearDown network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\" successfully" Nov 12 20:55:23.309519 containerd[1570]: time="2024-11-12T20:55:23.308985178Z" level=info msg="StopPodSandbox for \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\" returns successfully" Nov 12 20:55:23.317101 containerd[1570]: time="2024-11-12T20:55:23.317043284Z" level=info msg="RemovePodSandbox for \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\"" Nov 12 20:55:23.321541 containerd[1570]: time="2024-11-12T20:55:23.321496271Z" level=info msg="Forcibly stopping sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\"" Nov 12 20:55:23.337814 containerd[1570]: time="2024-11-12T20:55:23.337754439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:23.338928 containerd[1570]: time="2024-11-12T20:55:23.338847347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:55:23.340485 containerd[1570]: time="2024-11-12T20:55:23.340449416Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:23.343843 containerd[1570]: time="2024-11-12T20:55:23.343791800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:23.344350 containerd[1570]: time="2024-11-12T20:55:23.344311062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.677820051s" Nov 12 20:55:23.344441 containerd[1570]: time="2024-11-12T20:55:23.344394562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" 
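
The recurring "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP" WARNINGs are a safety check, and the mismatch is visible in the dumps: the DEL arrives for the old sandbox (9e597...) while the WorkloadEndpoint now records the replacement container (d1218...), so the endpoint is left in place for the live pod. The guard reduces to a comparison, sketched here:

    package main

    import "fmt"

    // shouldDeleteWEP: delete the endpoint only when the DEL's container ID
    // still matches the one recorded on the WorkloadEndpoint.
    func shouldDeleteWEP(cniContainerID, wepContainerID string) bool {
        return cniContainerID == wepContainerID
    }

    func main() {
        del := "9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace"
        wep := "d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba"
        fmt.Println("delete WEP?", shouldDeleteWEP(del, wep)) // false: don't delete
    }
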
Nov 12 20:55:23.346383 containerd[1570]: time="2024-11-12T20:55:23.346355994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:55:23.349193 containerd[1570]: time="2024-11-12T20:55:23.347838504Z" level=info msg="CreateContainer within sandbox \"56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:55:23.366827 containerd[1570]: time="2024-11-12T20:55:23.366779975Z" level=info msg="CreateContainer within sandbox \"56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9df190fce28e107f2033cc930ca41ab4367325baf1dc0cb90258415fb1b879fe\"" Nov 12 20:55:23.368034 containerd[1570]: time="2024-11-12T20:55:23.367981812Z" level=info msg="StartContainer for \"9df190fce28e107f2033cc930ca41ab4367325baf1dc0cb90258415fb1b879fe\"" Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.362 [WARNING][5151] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--mxktf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"18cc20c4-abd6-46ab-a97e-0b6b0d9a58b4", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1218f572e3087fa3af3e979e6cf5da36d33fd0ce1942ec8b174da22558078ba", Pod:"coredns-76f75df574-mxktf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62d9a21e570", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.363 [INFO][5151] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.363 [INFO][5151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" iface="eth0" netns="" Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.363 [INFO][5151] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.363 [INFO][5151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.388 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" HandleID="k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.389 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.389 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.397 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" HandleID="k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.397 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" HandleID="k8s-pod-network.9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Workload="localhost-k8s-coredns--76f75df574--mxktf-eth0" Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.399 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:23.405564 containerd[1570]: 2024-11-12 20:55:23.401 [INFO][5151] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace" Nov 12 20:55:23.406159 containerd[1570]: time="2024-11-12T20:55:23.405588274Z" level=info msg="TearDown network for sandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\" successfully" Nov 12 20:55:23.417693 containerd[1570]: time="2024-11-12T20:55:23.417646524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
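
The warning above ("an error occurred when try to find sandbox: not found", quoted verbatim from containerd) together with the RemovePodSandbox success that follows shows the removal path tolerating an already-deleted sandbox: the status lookup fails, the event is sent with a nil podSandboxStatus, and the forcible stop still returns successfully, presumably so that garbage collection of stale sandboxes does not wedge on missing state.
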
Nov 12 20:55:23.417820 containerd[1570]: time="2024-11-12T20:55:23.417728623Z" level=info msg="RemovePodSandbox \"9e59720617c5f1571fd6a1f1e2e89b74771d3af4cdd4db7fcba0d5dada7ebace\" returns successfully" Nov 12 20:55:23.418496 containerd[1570]: time="2024-11-12T20:55:23.418462828Z" level=info msg="StopPodSandbox for \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\"" Nov 12 20:55:23.461014 containerd[1570]: time="2024-11-12T20:55:23.460853354Z" level=info msg="StartContainer for \"9df190fce28e107f2033cc930ca41ab4367325baf1dc0cb90258415fb1b879fe\" returns successfully" Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.472 [WARNING][5205] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0", GenerateName:"calico-apiserver-7669974dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d030104e-d3be-4689-840e-40e7cceed6f7", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7669974dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b", Pod:"calico-apiserver-7669974dd4-rq2l5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e2a1019002", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.474 [INFO][5205] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.475 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" iface="eth0" netns="" Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.475 [INFO][5205] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.475 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.502 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" HandleID="k8s-pod-network.4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.502 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.502 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.508 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" HandleID="k8s-pod-network.4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.508 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" HandleID="k8s-pod-network.4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.510 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:23.515960 containerd[1570]: 2024-11-12 20:55:23.513 [INFO][5205] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:23.515960 containerd[1570]: time="2024-11-12T20:55:23.515881798Z" level=info msg="TearDown network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\" successfully" Nov 12 20:55:23.515960 containerd[1570]: time="2024-11-12T20:55:23.515932977Z" level=info msg="StopPodSandbox for \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\" returns successfully" Nov 12 20:55:23.516963 containerd[1570]: time="2024-11-12T20:55:23.516878240Z" level=info msg="RemovePodSandbox for \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\"" Nov 12 20:55:23.516963 containerd[1570]: time="2024-11-12T20:55:23.516942543Z" level=info msg="Forcibly stopping sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\"" Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.557 [WARNING][5249] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0", GenerateName:"calico-apiserver-7669974dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d030104e-d3be-4689-840e-40e7cceed6f7", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7669974dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56cd71a9e7ca8e1b835a189ed0c204d391ea77d84b38a90ec84c336ff5dae82b", Pod:"calico-apiserver-7669974dd4-rq2l5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e2a1019002", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.557 [INFO][5249] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.557 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" iface="eth0" netns="" Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.557 [INFO][5249] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.557 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.580 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" HandleID="k8s-pod-network.4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.580 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.580 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.587 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" HandleID="k8s-pod-network.4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.587 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" HandleID="k8s-pod-network.4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Workload="localhost-k8s-calico--apiserver--7669974dd4--rq2l5-eth0" Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.589 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:23.594230 containerd[1570]: 2024-11-12 20:55:23.591 [INFO][5249] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc" Nov 12 20:55:23.594973 containerd[1570]: time="2024-11-12T20:55:23.594257339Z" level=info msg="TearDown network for sandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\" successfully" Nov 12 20:55:23.671278 containerd[1570]: time="2024-11-12T20:55:23.671170029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:23.671278 containerd[1570]: time="2024-11-12T20:55:23.671293216Z" level=info msg="RemovePodSandbox \"4871b198f209b8214e68b7355fbcfb61a07fc4c5bb825554143fe3403e8d05dc\" returns successfully" Nov 12 20:55:23.671998 containerd[1570]: time="2024-11-12T20:55:23.671941818Z" level=info msg="StopPodSandbox for \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\"" Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.712 [WARNING][5279] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0", GenerateName:"calico-apiserver-7669974dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7f0cde4-77ad-4783-bce2-a9599e5f533e", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7669974dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5", Pod:"calico-apiserver-7669974dd4-n7xsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califea476c4c67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.713 [INFO][5279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.713 [INFO][5279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" iface="eth0" netns="" Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.713 [INFO][5279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.713 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.734 [INFO][5286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" HandleID="k8s-pod-network.faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.734 [INFO][5286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.734 [INFO][5286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.740 [WARNING][5286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" HandleID="k8s-pod-network.faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.740 [INFO][5286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" HandleID="k8s-pod-network.faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.742 [INFO][5286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:23.748031 containerd[1570]: 2024-11-12 20:55:23.745 [INFO][5279] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:23.748545 containerd[1570]: time="2024-11-12T20:55:23.748073100Z" level=info msg="TearDown network for sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\" successfully" Nov 12 20:55:23.748545 containerd[1570]: time="2024-11-12T20:55:23.748095344Z" level=info msg="StopPodSandbox for \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\" returns successfully" Nov 12 20:55:23.748660 containerd[1570]: time="2024-11-12T20:55:23.748632770Z" level=info msg="RemovePodSandbox for \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\"" Nov 12 20:55:23.748699 containerd[1570]: time="2024-11-12T20:55:23.748667226Z" level=info msg="Forcibly stopping sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\"" Nov 12 20:55:23.764208 containerd[1570]: time="2024-11-12T20:55:23.763816086Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:23.766683 containerd[1570]: time="2024-11-12T20:55:23.764724747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:55:23.766683 containerd[1570]: time="2024-11-12T20:55:23.766523636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 420.136632ms" Nov 12 20:55:23.766683 containerd[1570]: time="2024-11-12T20:55:23.766550267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:55:23.769108 containerd[1570]: time="2024-11-12T20:55:23.769083542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:55:23.772683 containerd[1570]: time="2024-11-12T20:55:23.772657824Z" level=info msg="CreateContainer within sandbox \"31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:55:23.789202 containerd[1570]: time="2024-11-12T20:55:23.789153561Z" level=info msg="CreateContainer within sandbox \"31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container 
id \"e681a4323be3bb6c67c95f9cb68fdbbac237f8d53cac51a1dc1faa5040d6ecd5\"" Nov 12 20:55:23.790891 containerd[1570]: time="2024-11-12T20:55:23.790840694Z" level=info msg="StartContainer for \"e681a4323be3bb6c67c95f9cb68fdbbac237f8d53cac51a1dc1faa5040d6ecd5\"" Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.800 [WARNING][5309] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0", GenerateName:"calico-apiserver-7669974dd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7f0cde4-77ad-4783-bce2-a9599e5f533e", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7669974dd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31d893bd0c5a17dc08b30e7131d093139dc03dfc06d523dfc6983b792e4b53c5", Pod:"calico-apiserver-7669974dd4-n7xsq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califea476c4c67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.800 [INFO][5309] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.800 [INFO][5309] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" iface="eth0" netns="" Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.800 [INFO][5309] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.800 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.825 [INFO][5323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" HandleID="k8s-pod-network.faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.825 [INFO][5323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.825 [INFO][5323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.833 [WARNING][5323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" HandleID="k8s-pod-network.faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.833 [INFO][5323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" HandleID="k8s-pod-network.faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Workload="localhost-k8s-calico--apiserver--7669974dd4--n7xsq-eth0" Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.834 [INFO][5323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:23.839963 containerd[1570]: 2024-11-12 20:55:23.837 [INFO][5309] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980" Nov 12 20:55:23.840487 containerd[1570]: time="2024-11-12T20:55:23.839996073Z" level=info msg="TearDown network for sandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\" successfully" Nov 12 20:55:23.852568 containerd[1570]: time="2024-11-12T20:55:23.852516625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:23.852694 containerd[1570]: time="2024-11-12T20:55:23.852598373Z" level=info msg="RemovePodSandbox \"faec499f9728c1ca54ceaf03e7b57b8b03662d6fd6a8ad02d71e5aaa3eb48980\" returns successfully" Nov 12 20:55:23.853228 containerd[1570]: time="2024-11-12T20:55:23.853179043Z" level=info msg="StopPodSandbox for \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\"" Nov 12 20:55:23.887772 containerd[1570]: time="2024-11-12T20:55:23.887708591Z" level=info msg="StartContainer for \"e681a4323be3bb6c67c95f9cb68fdbbac237f8d53cac51a1dc1faa5040d6ecd5\" returns successfully" Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.897 [WARNING][5360] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--fmv8c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d3e73151-088a-437b-9a45-b13477085c0c", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365", Pod:"coredns-76f75df574-fmv8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie143d59199b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.897 [INFO][5360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.897 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" iface="eth0" netns="" Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.897 [INFO][5360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.897 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.927 [INFO][5378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" HandleID="k8s-pod-network.eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.928 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.928 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.934 [WARNING][5378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" HandleID="k8s-pod-network.eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.934 [INFO][5378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" HandleID="k8s-pod-network.eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.935 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:23.941887 containerd[1570]: 2024-11-12 20:55:23.938 [INFO][5360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:23.942596 containerd[1570]: time="2024-11-12T20:55:23.941944517Z" level=info msg="TearDown network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\" successfully" Nov 12 20:55:23.942596 containerd[1570]: time="2024-11-12T20:55:23.941974744Z" level=info msg="StopPodSandbox for \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\" returns successfully" Nov 12 20:55:23.942815 containerd[1570]: time="2024-11-12T20:55:23.942773355Z" level=info msg="RemovePodSandbox for \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\"" Nov 12 20:55:23.942876 containerd[1570]: time="2024-11-12T20:55:23.942814605Z" level=info msg="Forcibly stopping sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\"" Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:23.990 [WARNING][5405] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--fmv8c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d3e73151-088a-437b-9a45-b13477085c0c", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1920ee7455be820559d0baef6c811d6bfc4f490dcd8cb40bc0926312422b365", Pod:"coredns-76f75df574-fmv8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie143d59199b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:23.991 [INFO][5405] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:23.991 [INFO][5405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" iface="eth0" netns="" Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:23.991 [INFO][5405] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:23.991 [INFO][5405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:24.020 [INFO][5414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" HandleID="k8s-pod-network.eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:24.020 [INFO][5414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:24.020 [INFO][5414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:24.047 [WARNING][5414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" HandleID="k8s-pod-network.eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:24.047 [INFO][5414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" HandleID="k8s-pod-network.eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Workload="localhost-k8s-coredns--76f75df574--fmv8c-eth0" Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:24.049 [INFO][5414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:24.055018 containerd[1570]: 2024-11-12 20:55:24.052 [INFO][5405] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10" Nov 12 20:55:24.055539 containerd[1570]: time="2024-11-12T20:55:24.055071006Z" level=info msg="TearDown network for sandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\" successfully" Nov 12 20:55:24.098703 containerd[1570]: time="2024-11-12T20:55:24.098550140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:24.098703 containerd[1570]: time="2024-11-12T20:55:24.098647277Z" level=info msg="RemovePodSandbox \"eee2e8248bfe9e50ca827a485b2645190f7a9014b6987eb6cf6f3c1fd2a17d10\" returns successfully" Nov 12 20:55:24.099639 containerd[1570]: time="2024-11-12T20:55:24.099574002Z" level=info msg="StopPodSandbox for \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\"" Nov 12 20:55:24.351746 kubelet[2741]: I1112 20:55:24.348888 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7669974dd4-n7xsq" podStartSLOduration=34.060658708 podStartE2EDuration="38.348845026s" podCreationTimestamp="2024-11-12 20:54:46 +0000 UTC" firstStartedPulling="2024-11-12 20:55:19.478995447 +0000 UTC m=+56.369501112" lastFinishedPulling="2024-11-12 20:55:23.767181775 +0000 UTC m=+60.657687430" observedRunningTime="2024-11-12 20:55:24.348436679 +0000 UTC m=+61.238942334" watchObservedRunningTime="2024-11-12 20:55:24.348845026 +0000 UTC m=+61.239350681" Nov 12 20:55:24.543592 kubelet[2741]: I1112 20:55:24.543524 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7669974dd4-rq2l5" podStartSLOduration=34.316990364 podStartE2EDuration="38.543467716s" podCreationTimestamp="2024-11-12 20:54:46 +0000 UTC" firstStartedPulling="2024-11-12 20:55:19.118301642 +0000 UTC m=+56.008807297" lastFinishedPulling="2024-11-12 20:55:23.344778994 +0000 UTC m=+60.235284649" observedRunningTime="2024-11-12 20:55:24.541288938 +0000 UTC m=+61.431794593" watchObservedRunningTime="2024-11-12 20:55:24.543467716 +0000 UTC m=+61.433973391" Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.287 [WARNING][5437] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0", GenerateName:"calico-kube-controllers-6f54cd56f9-", Namespace:"calico-system", SelfLink:"", UID:"2cd83f6e-605c-4278-926b-b78b4419f8ae", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f54cd56f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379", Pod:"calico-kube-controllers-6f54cd56f9-7gnfq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic30de1e56b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.287 [INFO][5437] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.287 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" iface="eth0" netns="" Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.287 [INFO][5437] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.287 [INFO][5437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.313 [INFO][5445] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" HandleID="k8s-pod-network.e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.313 [INFO][5445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.313 [INFO][5445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.353 [WARNING][5445] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" HandleID="k8s-pod-network.e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.353 [INFO][5445] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" HandleID="k8s-pod-network.e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.541 [INFO][5445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:24.550350 containerd[1570]: 2024-11-12 20:55:24.547 [INFO][5437] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:24.551412 containerd[1570]: time="2024-11-12T20:55:24.550378461Z" level=info msg="TearDown network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\" successfully" Nov 12 20:55:24.551412 containerd[1570]: time="2024-11-12T20:55:24.550407046Z" level=info msg="StopPodSandbox for \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\" returns successfully" Nov 12 20:55:24.551412 containerd[1570]: time="2024-11-12T20:55:24.551051267Z" level=info msg="RemovePodSandbox for \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\"" Nov 12 20:55:24.551412 containerd[1570]: time="2024-11-12T20:55:24.551083950Z" level=info msg="Forcibly stopping sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\"" Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.606 [WARNING][5469] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0", GenerateName:"calico-kube-controllers-6f54cd56f9-", Namespace:"calico-system", SelfLink:"", UID:"2cd83f6e-605c-4278-926b-b78b4419f8ae", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f54cd56f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14f3c593a05ff9b1c80727155305042278cab07782fc7c604c08a8b9105d5379", Pod:"calico-kube-controllers-6f54cd56f9-7gnfq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic30de1e56b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.606 [INFO][5469] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.606 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" iface="eth0" netns="" Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.606 [INFO][5469] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.606 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.630 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" HandleID="k8s-pod-network.e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.630 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.630 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.660 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" HandleID="k8s-pod-network.e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.661 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" HandleID="k8s-pod-network.e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Workload="localhost-k8s-calico--kube--controllers--6f54cd56f9--7gnfq-eth0" Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.663 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:24.669274 containerd[1570]: 2024-11-12 20:55:24.666 [INFO][5469] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c" Nov 12 20:55:24.669274 containerd[1570]: time="2024-11-12T20:55:24.669224688Z" level=info msg="TearDown network for sandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\" successfully" Nov 12 20:55:25.213215 containerd[1570]: time="2024-11-12T20:55:25.213139136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:25.213466 containerd[1570]: time="2024-11-12T20:55:25.213254848Z" level=info msg="RemovePodSandbox \"e183c01ad65463f0ca5521927f6e81a91e7bcd52f734912cd8e0837fe70ae86c\" returns successfully" Nov 12 20:55:25.214378 containerd[1570]: time="2024-11-12T20:55:25.214322414Z" level=info msg="StopPodSandbox for \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\"" Nov 12 20:55:25.270711 kubelet[2741]: I1112 20:55:25.269958 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:55:25.270711 kubelet[2741]: I1112 20:55:25.270341 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.257 [WARNING][5503] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ll6wf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3131bc61-5520-4f07-bd62-766f60d48de0", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8", Pod:"csi-node-driver-ll6wf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ae57f85641", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.257 [INFO][5503] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.257 [INFO][5503] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" iface="eth0" netns="" Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.257 [INFO][5503] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.257 [INFO][5503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.288 [INFO][5510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" HandleID="k8s-pod-network.161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.289 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.289 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.296 [WARNING][5510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" HandleID="k8s-pod-network.161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.296 [INFO][5510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" HandleID="k8s-pod-network.161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.299 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:25.306849 containerd[1570]: 2024-11-12 20:55:25.303 [INFO][5503] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:25.307381 containerd[1570]: time="2024-11-12T20:55:25.306894852Z" level=info msg="TearDown network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\" successfully" Nov 12 20:55:25.307381 containerd[1570]: time="2024-11-12T20:55:25.306944487Z" level=info msg="StopPodSandbox for \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\" returns successfully" Nov 12 20:55:25.307513 containerd[1570]: time="2024-11-12T20:55:25.307472825Z" level=info msg="RemovePodSandbox for \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\"" Nov 12 20:55:25.307742 containerd[1570]: time="2024-11-12T20:55:25.307719450Z" level=info msg="Forcibly stopping sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\"" Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.357 [WARNING][5535] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ll6wf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3131bc61-5520-4f07-bd62-766f60d48de0", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8", Pod:"csi-node-driver-ll6wf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ae57f85641", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.358 [INFO][5535] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.358 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" iface="eth0" netns="" Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.358 [INFO][5535] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.358 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.381 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" HandleID="k8s-pod-network.161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.382 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.382 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.393 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" HandleID="k8s-pod-network.161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.394 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" HandleID="k8s-pod-network.161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Workload="localhost-k8s-csi--node--driver--ll6wf-eth0" Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.396 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:25.401618 containerd[1570]: 2024-11-12 20:55:25.399 [INFO][5535] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3" Nov 12 20:55:25.402140 containerd[1570]: time="2024-11-12T20:55:25.401665241Z" level=info msg="TearDown network for sandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\" successfully" Nov 12 20:55:25.655091 containerd[1570]: time="2024-11-12T20:55:25.655032832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:55:25.655660 containerd[1570]: time="2024-11-12T20:55:25.655120821Z" level=info msg="RemovePodSandbox \"161ebb2da18cc0993bb5826c44b1bc9c0c4b8b207a69e191313b89c80155b7d3\" returns successfully" Nov 12 20:55:26.362125 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:57436.service - OpenSSH per-connection server daemon (10.0.0.1:57436). Nov 12 20:55:26.529392 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 57436 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U Nov 12 20:55:26.531405 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:26.536317 systemd-logind[1552]: New session 15 of user core. Nov 12 20:55:26.545205 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:55:26.683339 sshd[5559]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:26.689871 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:57436.service: Deactivated successfully. Nov 12 20:55:26.696557 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:55:26.698608 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:55:26.700787 systemd-logind[1552]: Removed session 15. 
Nov 12 20:55:26.743722 containerd[1570]: time="2024-11-12T20:55:26.743665345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:26.744965 containerd[1570]: time="2024-11-12T20:55:26.744919417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635"
Nov 12 20:55:26.746385 containerd[1570]: time="2024-11-12T20:55:26.746347555Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:26.752143 containerd[1570]: time="2024-11-12T20:55:26.752110173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:26.753018 containerd[1570]: time="2024-11-12T20:55:26.752968906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 2.983486934s"
Nov 12 20:55:26.753018 containerd[1570]: time="2024-11-12T20:55:26.753012469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\""
Nov 12 20:55:26.754898 containerd[1570]: time="2024-11-12T20:55:26.754853692Z" level=info msg="CreateContainer within sandbox \"1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Nov 12 20:55:26.776557 containerd[1570]: time="2024-11-12T20:55:26.776487176Z" level=info msg="CreateContainer within sandbox \"1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c427cdc6783fb1293459ad1ce19ba70180f154470f321121cb964c807afcc3d6\""
Nov 12 20:55:26.777324 containerd[1570]: time="2024-11-12T20:55:26.777287806Z" level=info msg="StartContainer for \"c427cdc6783fb1293459ad1ce19ba70180f154470f321121cb964c807afcc3d6\""
Nov 12 20:55:26.872744 containerd[1570]: time="2024-11-12T20:55:26.872675938Z" level=info msg="StartContainer for \"c427cdc6783fb1293459ad1ce19ba70180f154470f321121cb964c807afcc3d6\" returns successfully"
Nov 12 20:55:26.874824 containerd[1570]: time="2024-11-12T20:55:26.874787691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\""
Nov 12 20:55:28.388581 containerd[1570]: time="2024-11-12T20:55:28.388516104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:28.389350 containerd[1570]: time="2024-11-12T20:55:28.389256928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080"
Nov 12 20:55:28.390470 containerd[1570]: time="2024-11-12T20:55:28.390422688Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:28.392864 containerd[1570]: time="2024-11-12T20:55:28.392819062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:28.393432 containerd[1570]: time="2024-11-12T20:55:28.393395931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.518438803s"
Nov 12 20:55:28.393487 containerd[1570]: time="2024-11-12T20:55:28.393431439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\""
Nov 12 20:55:28.395590 containerd[1570]: time="2024-11-12T20:55:28.395547586Z" level=info msg="CreateContainer within sandbox \"1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Nov 12 20:55:29.142173 containerd[1570]: time="2024-11-12T20:55:29.142079460Z" level=info msg="CreateContainer within sandbox \"1f56c2c4b843a3ab1c3983bdf385a5ae0ea0c98b62447dc36b2da002af0297a8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1b674cefd8f1d6f79aebbe1d747c00cd689c4d31ffa529933a4726dc0db850c4\""
Nov 12 20:55:29.143007 containerd[1570]: time="2024-11-12T20:55:29.142962827Z" level=info msg="StartContainer for \"1b674cefd8f1d6f79aebbe1d747c00cd689c4d31ffa529933a4726dc0db850c4\""
Nov 12 20:55:29.267744 containerd[1570]: time="2024-11-12T20:55:29.267688391Z" level=info msg="StartContainer for \"1b674cefd8f1d6f79aebbe1d747c00cd689c4d31ffa529933a4726dc0db850c4\" returns successfully"
Nov 12 20:55:29.294483 kubelet[2741]: I1112 20:55:29.294392 2741 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-ll6wf" podStartSLOduration=34.633972678 podStartE2EDuration="43.294351132s" podCreationTimestamp="2024-11-12 20:54:46 +0000 UTC" firstStartedPulling="2024-11-12 20:55:19.733343273 +0000 UTC m=+56.623848928" lastFinishedPulling="2024-11-12 20:55:28.393721727 +0000 UTC m=+65.284227382" observedRunningTime="2024-11-12 20:55:29.294192127 +0000 UTC m=+66.184697782" watchObservedRunningTime="2024-11-12 20:55:29.294351132 +0000 UTC m=+66.184856787"
Nov 12 20:55:29.326591 kubelet[2741]: I1112 20:55:29.326539 2741 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Nov 12 20:55:29.327791 kubelet[2741]: I1112 20:55:29.327751 2741 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Nov 12 20:55:29.358558 kubelet[2741]: I1112 20:55:29.358503 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:55:31.700151 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:57450.service - OpenSSH per-connection server daemon (10.0.0.1:57450).
Nov 12 20:55:31.735147 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 57450 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:31.736975 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:31.741834 systemd-logind[1552]: New session 16 of user core.
Nov 12 20:55:31.752192 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 20:55:31.893398 sshd[5678]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:31.899395 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:57450.service: Deactivated successfully.
Nov 12 20:55:31.902209 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit.
Nov 12 20:55:31.902252 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 20:55:31.903557 systemd-logind[1552]: Removed session 16.
Nov 12 20:55:36.904356 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:43170.service - OpenSSH per-connection server daemon (10.0.0.1:43170).
Nov 12 20:55:36.944791 sshd[5721]: Accepted publickey for core from 10.0.0.1 port 43170 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:36.948243 sshd[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:36.957416 systemd-logind[1552]: New session 17 of user core.
Nov 12 20:55:36.968571 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:55:37.106424 sshd[5721]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:37.112209 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:43170.service: Deactivated successfully.
Nov 12 20:55:37.114897 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:55:37.115006 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:55:37.116405 systemd-logind[1552]: Removed session 17.
Nov 12 20:55:42.123306 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:43184.service - OpenSSH per-connection server daemon (10.0.0.1:43184).
Nov 12 20:55:42.156588 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 43184 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:42.158431 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:42.162785 systemd-logind[1552]: New session 18 of user core.
Nov 12 20:55:42.177282 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 20:55:42.291958 sshd[5738]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:42.300364 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:43190.service - OpenSSH per-connection server daemon (10.0.0.1:43190).
Nov 12 20:55:42.300970 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:43184.service: Deactivated successfully.
Nov 12 20:55:42.305532 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 20:55:42.306616 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit.
Nov 12 20:55:42.307885 systemd-logind[1552]: Removed session 18.
Nov 12 20:55:42.335258 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 43190 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:42.337373 sshd[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:42.342554 systemd-logind[1552]: New session 19 of user core.
Nov 12 20:55:42.351506 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:55:42.985978 sshd[5751]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:42.993196 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:43192.service - OpenSSH per-connection server daemon (10.0.0.1:43192).
Nov 12 20:55:42.993693 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:43190.service: Deactivated successfully.
Nov 12 20:55:42.996677 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:55:42.997606 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:55:42.999607 systemd-logind[1552]: Removed session 19.
Nov 12 20:55:43.028145 sshd[5764]: Accepted publickey for core from 10.0.0.1 port 43192 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:43.030080 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:43.034903 systemd-logind[1552]: New session 20 of user core.
Nov 12 20:55:43.048276 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:55:45.230056 kubelet[2741]: E1112 20:55:45.229988 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:47.938572 kubelet[2741]: I1112 20:55:47.938515 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:55:47.993545 sshd[5764]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:48.003280 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:42408.service - OpenSSH per-connection server daemon (10.0.0.1:42408).
Nov 12 20:55:48.003943 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:43192.service: Deactivated successfully.
Nov 12 20:55:48.008039 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:55:48.008812 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:55:48.009883 systemd-logind[1552]: Removed session 20.
Nov 12 20:55:48.040579 sshd[5806]: Accepted publickey for core from 10.0.0.1 port 42408 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:48.042572 sshd[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:48.047273 systemd-logind[1552]: New session 21 of user core.
Nov 12 20:55:48.057342 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:55:48.796960 sshd[5806]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:48.803156 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:42418.service - OpenSSH per-connection server daemon (10.0.0.1:42418).
Nov 12 20:55:48.803831 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:42408.service: Deactivated successfully.
Nov 12 20:55:48.808614 systemd-logind[1552]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:55:48.809608 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:55:48.810773 systemd-logind[1552]: Removed session 21.
Nov 12 20:55:48.835420 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 42418 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:48.837158 sshd[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:48.841692 systemd-logind[1552]: New session 22 of user core.
Nov 12 20:55:48.848250 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:55:49.023977 sshd[5819]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:49.029517 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:42418.service: Deactivated successfully.
Nov 12 20:55:49.033535 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:55:49.034429 systemd-logind[1552]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:55:49.035469 systemd-logind[1552]: Removed session 22.
Nov 12 20:55:54.038324 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:42456.service - OpenSSH per-connection server daemon (10.0.0.1:42456).
Nov 12 20:55:54.069674 sshd[5839]: Accepted publickey for core from 10.0.0.1 port 42456 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:54.071429 sshd[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:54.076099 systemd-logind[1552]: New session 23 of user core.
Nov 12 20:55:54.089279 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:55:54.200439 sshd[5839]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:54.204508 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:42456.service: Deactivated successfully.
Nov 12 20:55:54.208444 systemd-logind[1552]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:55:54.209258 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:55:54.210248 systemd-logind[1552]: Removed session 23.
Nov 12 20:55:58.229488 kubelet[2741]: E1112 20:55:58.229421 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:58.648142 kubelet[2741]: E1112 20:55:58.648112 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:55:59.214189 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:54092.service - OpenSSH per-connection server daemon (10.0.0.1:54092).
Nov 12 20:55:59.255530 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 54092 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:55:59.257570 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:55:59.262070 systemd-logind[1552]: New session 24 of user core.
Nov 12 20:55:59.273288 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:55:59.521401 sshd[5881]: pam_unix(sshd:session): session closed for user core
Nov 12 20:55:59.525280 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:54092.service: Deactivated successfully.
Nov 12 20:55:59.527640 systemd-logind[1552]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:55:59.527719 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:55:59.528939 systemd-logind[1552]: Removed session 24.
Nov 12 20:56:03.229722 kubelet[2741]: E1112 20:56:03.229646 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:03.229722 kubelet[2741]: E1112 20:56:03.229716 2741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 20:56:04.532262 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:54108.service - OpenSSH per-connection server daemon (10.0.0.1:54108).
Nov 12 20:56:04.564934 sshd[5921]: Accepted publickey for core from 10.0.0.1 port 54108 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:56:04.566619 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:04.571160 systemd-logind[1552]: New session 25 of user core.
Nov 12 20:56:04.584227 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:56:04.705360 sshd[5921]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:04.711561 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:54108.service: Deactivated successfully.
Nov 12 20:56:04.714382 systemd-logind[1552]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:56:04.714480 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:56:04.716104 systemd-logind[1552]: Removed session 25.
Nov 12 20:56:09.730190 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:33990.service - OpenSSH per-connection server daemon (10.0.0.1:33990).
Nov 12 20:56:09.760930 sshd[5938]: Accepted publickey for core from 10.0.0.1 port 33990 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:56:09.762636 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:09.766848 systemd-logind[1552]: New session 26 of user core.
Nov 12 20:56:09.781245 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:56:09.894491 sshd[5938]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:09.898722 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:33990.service: Deactivated successfully.
Nov 12 20:56:09.902272 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 20:56:09.902607 systemd-logind[1552]: Session 26 logged out. Waiting for processes to exit.
Nov 12 20:56:09.903786 systemd-logind[1552]: Removed session 26.
Nov 12 20:56:14.912450 systemd[1]: Started sshd@26-10.0.0.137:22-10.0.0.1:34000.service - OpenSSH per-connection server daemon (10.0.0.1:34000).
Nov 12 20:56:14.943973 sshd[5954]: Accepted publickey for core from 10.0.0.1 port 34000 ssh2: RSA SHA256:ff+1E3IxvymPzLNMRy6nd5oJGXfM6IAzu8KdPl3+w6U
Nov 12 20:56:14.945856 sshd[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:14.950798 systemd-logind[1552]: New session 27 of user core.
Nov 12 20:56:14.959554 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 20:56:15.087075 sshd[5954]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:15.091411 systemd[1]: sshd@26-10.0.0.137:22-10.0.0.1:34000.service: Deactivated successfully.
Nov 12 20:56:15.094155 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 20:56:15.094948 systemd-logind[1552]: Session 27 logged out. Waiting for processes to exit.
Nov 12 20:56:15.095951 systemd-logind[1552]: Removed session 27.